‘You can’t lick a badger twice’: How Google’s AI Overview hallucinates idioms

    By Georgina Torbet
Published April 24, 2025

The latest AI trend is a funny one: users have discovered that if you type a made-up phrase into Google followed by the word “meaning,” Google’s AI Overview feature will hallucinate a definition for the phrase.

Historian Greg Jenner kicked off the trend with a post on Bluesky in which he asked Google to explain the meaning of “You can’t lick a badger twice.” AI Overview helpfully explained that the expression means you can’t deceive someone a second time after they’ve already been tricked once, which sounds reasonable enough, except that the idiom didn’t exist until Jenner made it up.

Since then, people have been having a lot of fun getting AI Overview to explain idioms like “A squid in a vase will speak no ill” (meaning that something outside of its natural environment will be unable to cause harm, apparently) or “You can take your dog to the beach but you can’t sail it to Switzerland” (which is, according to AI Overview, a fairly straightforward phrase about the difficulty of international travel with pets).

It doesn’t work in all cases, though, as some phrases don’t return AI Overview results. “It’s wildly inconsistent,” cognitive scientist Gary Marcus told Wired, “and that’s what you expect of GenAI.”

Jenner points out that, as entertaining as this is, it highlights the pitfalls of relying too heavily on AI-generated sources like AI Overview for information. “It’s a warning sign that one of the key functions of Googling – the ability to factcheck a quote, verify a source, or track down something half remembered – will get so much harder if AI prefers to legitimate statistical possibilities over actual truth,” Jenner wrote.

This isn’t the first time that people have pointed out the limitations of information provided by AI, and AI Overview in particular. When AI Overview was launched, it infamously suggested that people should eat one small rock per day and that they could put glue on their pizza, though these particular answers were quickly removed.

Since then, Google has said in a statement to Digital Trends that the majority of AI Overviews provide helpful and factual information, and that it was still gathering feedback on its AI product.

For now, though, let this serve as a reminder to double-check the information that appears in the AI Overview box at the top of Google results, as it may not be accurate.
