- Nearly half of all AI assistant answers about the news contain major errors, a major international study has found
- The factual, sourcing, or contextual issues were apparent across 14 languages and 18 countries
- Gemini fared the worst, with double the rate of significant problems compared to competitors
When you ask an AI assistant about news and current events, you might expect a detached, authoritative answer. But according to a sweeping international study led by the BBC and coordinated by the European Broadcasting Union (EBU), nearly half the time those answers are wrong, misleading, or just plain made up (anyone who’s dealt with the nonsense of Apple’s AI-written headlines can relate).
The report dug into how ChatGPT, Microsoft Copilot, Google Gemini, and Perplexity handle news queries across 14 languages in 18 countries, analyzing over 3,000 individual responses from the AI tools. Professional journalists from 22 public media outlets evaluated each answer for accuracy, sourcing, and how well it distinguished news from opinion.
The results were bleak for those relying on AI for their news. The report found that 45% of all answers had a significant issue, 31% had sourcing problems, and 20% were simply inaccurate. This isn’t just a matter of one or two embarrassing mistakes, like confusing the Prime Minister of Belgium with the frontman of a Belgian pop group. The research found deep, structural issues with how these assistants process and deliver news, regardless of language, country, or platform.
In some languages, the assistants outright hallucinated details. In others, they attributed quotes to outlets that hadn’t published anything even close to what was being cited. Context was often missing, with the assistants sometimes giving simplistic or misleading overviews instead of crucial nuance. In the worst cases, that could change the meaning of an entire news story.
Not every assistant was equally problematic. Gemini misfired in a staggering 76% of responses, mostly due to missing or poor sourcing.
Unlike a Google search, which lets users sift through a dozen sources, a chatbot’s answer often feels final. It reads with authority and clarity, giving the impression that it’s been fact-checked and edited, when in fact it may be little more than a fuzzy collage of half-remembered summaries.
That’s part of why the stakes are so high. And why even partnerships like those between ChatGPT and The Washington Post can’t solve the problem entirely.
AI news literacy
The problem is obvious, especially given how quickly AI assistants are becoming the go-to interface for news. The study cited the 2025 Reuters Institute Digital News Report, which estimates that 7% of all online news consumers now use an AI assistant to get their information, rising to 15% among those under 25. People are already asking AI to explain the world to them, and the AI is getting the world wrong a disturbing amount of the time.
If you’ve ever asked ChatGPT, Gemini, or Copilot to summarize a news event, you’ve probably seen one of these imperfect answers in action. ChatGPT’s difficulties with searching for the news are well known at this point. But maybe you didn’t even notice. That’s part of the problem: these tools are often wrong with such fluency that it doesn’t feel like a red flag. That’s why media literacy and ongoing scrutiny are essential.
To try to improve the situation, the EBU and its partners released a “News Integrity in AI Assistants Toolkit,” which serves as an AI literacy starter pack designed to help developers and journalists alike. It outlines both what makes a good AI response and what kinds of failures users and media watchdogs should be looking for.
Even as companies like OpenAI and Google race ahead with faster, slicker versions of their assistants, these reports show why transparency and accountability are so important. That doesn’t mean AI can’t be helpful, even for curating the endless firehose of news. It does mean that, for now, it should come with a disclaimer. And even if it doesn’t, don’t assume the assistant knows best – check your sources, and stick to the most reliable ones, like TechRadar.
Eric Hal Schwartz