You might want to think twice about getting news from Google Gemini

    By Manisha Priyadarshini
Published October 23, 2025

What’s happened? A major study led by the European Broadcasting Union (EBU) in coordination with the BBC has revealed serious flaws in how popular AI assistants handle news-related queries, with Google’s Gemini standing out as the worst performer overall.

This is important because: If you rely on an AI assistant for news, these findings matter, especially since one model fared significantly worse than the rest.

Why should I care? You might already be using an AI assistant to catch up on the news, but if that assistant happens to be Gemini, this study suggests you face a greater risk of encountering misinformation.

The bottom line is that AI assistants can help you stay informed, but they should not be your only source of truth.

