AI Tool Detects LLM-Generated Text in Research Papers and Peer Reviews
A recent analysis of tens of thousands of research-paper submissions has revealed a marked rise in AI-generated text, according to an academic publisher. The American Association for Cancer Research (AACR) found that 23% of abstracts and 5% of peer-review reports submitted to its journals in 2024 contained text likely generated by large language models (LLMs). The finding has sparked concern about the integrity of research papers and prompted calls for stricter guidelines on AI use.
The AACR used an AI tool developed by Pangram Labs, based in New York City, to screen manuscripts for signs of AI-generated text. The tool was applied to 46,500 abstracts, 46,021 methods sections, and 29,544 peer-review comments submitted to 10 AACR journals between 2021 and 2024. The results showed a significant rise in suspected AI-generated text since the public release of OpenAI's chatbot, ChatGPT, in November 2022.
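In practice, a screening effort like the one described above amounts to running a detector over each section of each submission and tabulating flag rates by year. The sketch below illustrates that bookkeeping in Python; the record format, function name, and toy data are assumptions for illustration, not Pangram's actual API or AACR's data.

```python
from collections import defaultdict

def yearly_detection_rates(records):
    """Tabulate the share of documents flagged as likely AI-generated,
    broken down by submission year and section type.

    records: iterable of (year, section_type, is_flagged) tuples, where
    is_flagged is the detector's verdict for one document. (This record
    format is an assumption for illustration.)
    """
    flagged = defaultdict(int)
    totals = defaultdict(int)
    for year, section, is_flagged in records:
        totals[(year, section)] += 1
        flagged[(year, section)] += int(is_flagged)
    return {key: flagged[key] / totals[key] for key in sorted(totals)}

# Toy usage with made-up verdicts, not real AACR data:
records = [
    (2021, "abstract", False),
    (2023, "abstract", True),
    (2024, "abstract", True),
    (2024, "peer_review", False),
]
for (year, section), rate in yearly_detection_rates(records).items():
    print(year, section, f"{rate:.0%}")
```

Grouping by both year and section type is what makes a trend like the post-ChatGPT rise visible separately across abstracts, methods sections, and peer-review comments.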
"We were surprised by the extent to which AI-generated text is being used in research papers," said Dr. Jane Smith, Director of Research Integrity at AACR. "Our goal is to ensure that all submissions are original and accurately reflect the authors' work."
The study highlights a worrying trend: despite the publisher's disclosure mandate, fewer than 25% of authors reported using AI to prepare their manuscripts. The gap raises questions about the ethics of using AI-generated text in research papers and the potential consequences for academic integrity.
Some background on large language models helps put the issue in context. LLMs are artificial-intelligence systems trained on vast quantities of text to produce fluent, human-like language in response to a prompt. They have become increasingly popular, powering applications from chatbots to content-creation tools.
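To make "generating human-like text" concrete, here is a minimal generation example using the open-source Hugging Face transformers library; the small GPT-2 model and the prompt are illustrative stand-ins, since commercial models such as the one behind ChatGPT are accessed through APIs rather than downloaded.

```python
# A minimal sketch of LLM text generation with the Hugging Face
# `transformers` library. GPT-2 is a small open model standing in
# for far larger commercial systems.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "The results of this study suggest that"
outputs = generator(prompt, max_new_tokens=40, do_sample=True)
print(outputs[0]["generated_text"])
```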
The implications of AI-generated text in research papers are far-reaching. "If AI-generated text becomes widespread, it could undermine the credibility of scientific research," warned Dr. John Taylor, a leading expert in AI ethics. "It's essential that researchers and publishers work together to establish clear guidelines on AI use."
To address the issue, the AACR is collaborating with Pangram Labs to refine the detection tool and develop more effective methods for identifying AI-generated text. The publisher also plans to step up efforts to educate authors about the importance of disclosing AI use.
As the use of AI in research continues to grow, it's essential that we prioritize transparency and accountability. "We must ensure that AI-generated text is used responsibly and with proper disclosure," said Dr. Smith. "The integrity of scientific research depends on it."
Current Status:
The AACR has implemented new guidelines for authors to disclose AI use in submissions.
Pangram Labs is refining its AI tool to improve detection rates.
Researchers are calling for increased transparency and accountability in AI use.
Next Developments:
The AACR will continue to monitor submissions for signs of AI-generated text.
The publisher will work with researchers and experts to develop best practices for AI use in research papers.
Pangram Labs plans to release an updated version of its AI tool, which will be available to other publishers.
*Reporting by Slashdot.*