AI chatbot company WiseOx has launched a new tool designed to help people communicate more effectively with AI. Named Pronto, this “AI Mascot” is trained specifically in prompt writing; in other words, it’s here to tell you what to say to other AIs so you can (hopefully) get the results you want.

We all know what coding is: it’s how we give a computer instructions it can follow to produce the results we want. With large language models (LLMs), we can give our instructions in natural human language, but it turns out there are still effective and ineffective ways to do this.

If your instructions are unclear or you skip steps that would be obvious to human listeners, the LLM you’re using could easily get confused and give you output that you don’t want. The reason for this is pretty simple — here it is in Pronto’s own words:

Yes, ChatGPT and other generative AI models can’t really understand what you’re saying. They don’t understand what they’re saying, either. It’s all smoke and mirrors, and if you want to maximize the chances of the AI being precise and accurate, you have to be precise and accurate yourself.

This is where “prompt engineering” comes in. By using clear, simple language, keeping instructions concise, and breaking complex tasks into smaller steps, you can consistently get better results. Things will still go wrong sometimes, but overall, it should improve your experience.
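To make that concrete, here is a minimal sketch of the difference between a vague prompt and a more carefully engineered one. It assumes the official OpenAI Python SDK as the client; the model name and the prompts themselves are illustrative choices on our part, not anything prescribed by WiseOx or Pronto.

# Illustrative sketch, assuming the official OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY in the environment.
# The prompts and model name are our own examples, not from Pronto.
from openai import OpenAI

client = OpenAI()

vague_prompt = "Write something about our product launch."

engineered_prompt = (
    "You are a marketing copywriter.\n"
    "Task: write a product launch announcement.\n"
    "Steps:\n"
    "1. Open with a one-sentence hook.\n"
    "2. List three key features in plain language.\n"
    "3. End with a call to action of under 15 words.\n"
    "Tone: friendly, no jargon. Length: about 120 words."
)

# Send both prompts and compare the outputs side by side.
for prompt in (vague_prompt, engineered_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model would do here
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")

The second prompt pins down format, tone, and length, which is exactly the kind of ambiguity the first one leaves to chance.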

If your prompt game isn’t that strong, you can paste your attempt into Pronto and ask for an improved version. The tool is brand new, so we’ll have to wait and see how well its improved prompts actually perform, but if it does a decent job, it could speed things up for people who like using LLMs but often find themselves revising their prompts.
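Pronto’s internals aren’t public, but the general pattern it represents, asking one model to rewrite a rough prompt before you use it, is easy to sketch. Everything below (the meta-prompt wording, the model name, and the helper function) is a hypothetical illustration under the same OpenAI SDK assumption as above, not Pronto’s actual behavior.

# Hypothetical "prompt improver" in the spirit of Pronto.
# Nothing here reflects Pronto's real implementation; the meta-prompt,
# model name, and helper function are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def improve_prompt(rough_prompt: str) -> str:
    """Ask a model to rewrite a rough prompt into a clearer, step-by-step one."""
    meta_prompt = (
        "Rewrite the prompt below so it is clear and specific, states the "
        "desired format and tone, and breaks the task into explicit steps. "
        "Return only the improved prompt.\n\n"
        f"Prompt: {rough_prompt}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": meta_prompt}],
    )
    return response.choices[0].message.content

print(improve_prompt("make me a good resume"))

If tools like this work, the interesting design question is what goes into the meta-prompt, since the rewriting advice it encodes is doing all the work.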
