Breaking News: AI Models Rely on Flawed Research from Retracted Scientific Papers
A recent study has revealed that some AI chatbots draw on material from retracted scientific papers when answering questions, raising concerns about the reliability of AI tools for evaluating scientific research. According to MIT Technology Review, researchers at the University of Tennessee in Memphis found that OpenAI's ChatGPT, running on the GPT-4o model, referenced retracted papers in five of 21 cases when asked questions about medical imaging.
Timeline and Details
The study was led by Weikuan Gu, a medical researcher at the University of Tennessee in Memphis. The researchers asked ChatGPT to answer questions based on information from 21 retracted papers about medical imaging. In five cases the chatbot referenced the retracted papers in its answers, and in only three of those did it advise caution.
Immediate Impact and Response
The findings have heightened concern among scientists and policymakers about depending on AI tools to assess scientific research. "If people only look at the content of the answer and do not click through to the paper and see that it's been retracted, that's really a problem," said Gu. The study underscores the need for more robust fact-checking mechanisms in AI systems.
Background Context
Retracted papers are papers that a journal has formally withdrawn, typically because of errors or flaws in the underlying research. AI chatbots are already known to fabricate links and references; drawing on genuine but retracted papers is a subtler problem, because an answer can appear well sourced while resting on discredited work.
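Publishers register retraction notices with Crossref, so one way a tool could flag a discredited reference is to ask whether any editorial update points at a paper's DOI. The sketch below is illustrative only and is not part of the study; the Crossref `updates` filter usage and the placeholder DOI are assumptions based on Crossref's public REST API documentation.

```python
# Illustrative sketch: look up retraction (or correction) notices registered
# against a DOI via the Crossref REST API. Not the researchers' method; the
# endpoint and "updates" filter reflect Crossref's documented public API, and
# the DOI in the usage example below is a placeholder.
import requests


def find_retraction_notices(doi: str) -> list[dict]:
    """Return Crossref records that register an editorial update (e.g. a retraction) to `doi`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"updates:{doi}", "rows": 20},
        timeout=10,
    )
    resp.raise_for_status()

    notices = []
    for item in resp.json()["message"]["items"]:
        # Each updating record lists the DOI(s) it amends under "update-to".
        for update in item.get("update-to", []):
            if update.get("DOI", "").lower() == doi.lower():
                notices.append({
                    "type": update.get("type"),        # e.g. "retraction", "correction"
                    "notice_doi": item.get("DOI"),     # DOI of the retraction notice itself
                    "date": update.get("updated", {}).get("date-parts"),
                })
    return notices


if __name__ == "__main__":
    # Placeholder DOI; replace with the paper being checked.
    for notice in find_retraction_notices("10.1000/example-doi"):
        print(notice)
```

A chatbot or literature tool could run a check like this on every citation it surfaces and warn the user when any returned update has type "retraction", rather than leaving readers to click through and discover the status themselves.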
What Happens Next
The study's findings will likely prompt further investigation into the use of retracted papers by AI models. As countries and industries invest heavily in AI tools for scientists, it is essential to ensure that these systems are reliable and trustworthy. The researchers plan to continue their work on developing more robust fact-checking mechanisms for AI systems.
The episode underscores the need for continued research into the limitations of AI tools and for verifying the accuracy of scientific information. As AI takes on an increasingly significant role in research and decision-making, addressing these concerns will be essential to preserving the integrity of both.
*This story is developing. Information compiled from MIT Technology Review reporting.*