October 14, 2025
Trusting chatbots for news? Be careful

Like many of you, I’m increasingly using ChatGPT and other chatbots as my personal oracles, asking them about everything from the intricacies of the Gaza peace talks to the latest news in Miami. So I was surprised to read a new study showing that, instead of getting smarter and more accurate, they’re actually producing more false responses on current events than they did a year ago.
The explosive report comes from NewsGuard, the respected information-reliability firm. Its audit of the 10 leading chatbots found that they repeated false information about controversial news events at nearly double the rate of a year ago.
While some chatbots performed better than others (I’ll get to the winners and losers in a minute), the overall picture is alarming: The 10 leading chatbots spread false responses on news topics an average of 35% of the time in the year ending in August 2025, up from 18% in the previous year.
At first, the study didn’t make much sense to me: In theory, these artificial intelligence (AI) assistants should be getting smarter and more accurate every day, as they learn from their past mistakes.
But McKenzie Sadeghi, NewsGuard’s Editor for AI and Foreign Influence, explained to me that chatbots are increasingly unreliable on real-time news events in part because they have become averse to saying, “I don’t know.”
“You may remember that when you would ask a chatbot about an election or about the assassination attempt against President Trump, they would say something like, ‘I was trained up to June 2024 and therefore can’t answer queries about topics after that date,’” she told me.
But that has changed now. Within the past year, in the rush to be helpful, almost all leading chatbots have begun doing real-time web searches that include social media. That’s “leading them to pull information from a very polluted information ecosystem,” Sadeghi said.
On top of that, Russia, Iran, China and other countries are now flooding the internet with AI-generated fake news sites. They spread falsehoods produced by massive content farms specifically designed to create a critical mass of false information that influences what the chatbots say, she added.
“Both Russia and Iran have created hundreds of fake news websites posing as local news outlets in an attempt to sway voters and manipulate public opinion,” Sadeghi said. “And chatbots increasingly rely on these content farms.”
According to the study, the Claude chatbot produced false claims in its responses on news events 10% of the time, Gemini 17%, ChatGPT and Meta 40% each, and Pi 57%.
There has been no immediate reaction from the major technology companies. Industry sources told me that chatbots are a work in progress. They argue that just as they are reducing the rate of “hallucinations” — or bizarre responses — on topics like health, they will soon figure out how to manage real-time news.
They should, and soon, because the line between reality and fake news is blurring by the hour.
On Sept. 30, for instance, OpenAI, the maker of ChatGPT, unveiled a new AI platform called Sora 2 that lets you create hyper-realistic videos of almost anything you wish. Experts warn that it can produce fake videos that are nearly impossible for the human eye to distinguish from real ones.
The companies have put up guardrails: Sora 2 and other video platforms, such as Google’s Veo 3, do not allow you to create fake videos involving public figures or videos that incite violence. They also apply a visible watermark indicating AI generation. But there have already been reports that these watermarks can be erased, and that Sora 2 has generated fake scenes of a man stuffing a ballot box and of people committing nonexistent crimes.
We’re entering, at lightning speed, a dangerous world where we can no longer tell what’s real and what isn’t. This is a fertile ground for autocrats, who love to further muddy the waters, get people to believe that “all politicians are the same,” blur the lines between right and wrong and — ultimately — get people to abandon the defense of basic values such as democracy and human rights.
As the late journalist Bill Moyers once warned, “Truth is the oxygen of the air of democracy.”
What should we do? As individuals, we have to become our own fact-checkers, refusing to share any story without first verifying that it comes from a credible source. As a nation, we must support global efforts to finally regulate technology companies, in cooperation with them, to stop the avalanche of falsehoods.
Otherwise, the false claims by today’s hyper-confident chatbots and the wave of hyper-realistic fake videos will continue to exacerbate polarization, threaten real-world violence and further undermine our threatened democracy.