AI Chatbots Now Get The News Wrong 1 Out Of 3 Times
A recent audit by fact-checking service NewsGuard has revealed a disturbing trend among leading generative AI systems: they now repeat false news claims 35% of the time, nearly double the 18% rate recorded in 2024. This sharp decline in accuracy raises serious concerns about the reliability of AI-powered chatbots and their potential impact on society.
"The drive for instant responses from chatbots has revealed their fundamental weakness because they now draw information from an internet space that contains poor content and artificial news and deceptive advertising," said McKenzie Sadeghi, a NewsGuard spokesperson. Rather than acknowledging their limitations or declining to weigh in on sensitive topics, the models now pull answers from a polluted online ecosystem.
NewsGuard's audit, conducted in August 2025, analyzed data from top AI chatbots and found that they struggle to distinguish truth from falsehood in real-time. This is largely due to their reliance on internet sources that often contain misinformation or biased content. "The result is a breakdown in the trust between users and these systems," Sadeghi added.
AI chatbots have grown increasingly popular in recent years, with many companies incorporating them into customer service platforms. That rapid adoption, however, has not been matched by investment in fact-checking and content verification.
Experts warn that the consequences of inaccurate AI-powered information can be far-reaching. "The spread of misinformation through AI systems can exacerbate social divisions, undermine trust in institutions, and even contribute to real-world harm," said Dr. Rachel Kim, a leading researcher on AI ethics.
To address these concerns, NewsGuard is calling for greater transparency and accountability from AI developers. Sadeghi emphasized that "AI companies must prioritize fact-checking and content verification processes to ensure the accuracy of their systems."
The latest developments in this area include the introduction of new AI-powered fact-checking tools, which aim to detect and correct false information in real-time. However, experts caution that these solutions are still in their infancy and require further refinement.
As AI chatbots continue to play an increasingly prominent role in our lives, it is essential that we prioritize accuracy, transparency, and accountability in their development and deployment. Only by acknowledging the limitations of these systems can we ensure that they serve as a positive force for society.
Sources:
NewsGuard audit report (August 2025)
Interview with McKenzie Sadeghi, NewsGuard spokesperson
Research paper by Dr. Rachel Kim on AI ethics
*Reporting by Forbes.*