The digital world is awash in text, and increasingly, it's difficult to tell what's human and what's machine. But a group of Wikipedia editors, initially focused on cleaning up AI-generated articles on the online encyclopedia, may have inadvertently sparked a new arms race in the quest to make AI sound more, well, human.
Since late 2023, the volunteers of WikiProject AI Cleanup have been on the hunt, meticulously tagging articles suspected of being written by artificial intelligence. Founded by French Wikipedia editor Ilyas Lebleu, the project has flagged over 500 articles for review. In August 2025, that accumulated experience was distilled into a formal list of tell-tale signs of AI writing: patterns in language and formatting that betray a non-human origin.
Now, that list has found an unexpected second life. Tech entrepreneur Siqi Chen has released an open-source plug-in called "Humanizer" for Anthropic's Claude Code AI assistant. This simple but ingenious tool feeds Claude the Wikipedia editors' list, essentially instructing the AI: "Don't write like this." Chen published the plug-in on GitHub, where it has quickly gained traction, amassing over 1,600 stars as of Monday.
"It's really handy that Wikipedia went and collated a detailed list of signs of AI writing," Chen wrote on X. "So much so that you can just tell your LLM to not do that."
The trick works because large language models (LLMs) like Claude, though trained on vast datasets of text to mimic human writing, still exhibit predictable patterns: overly formal language, repetitive sentence structures, and a tendency toward unnecessary introductory phrases. These are precisely the quirks that WikiProject AI Cleanup catalogued.
Humanizer is essentially a skill for Claude Code: a Markdown file of written instructions that the assistant loads alongside the user's prompt. By steering the model away from the catalogued patterns, the plug-in aims to make AI-generated text less detectable.
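To give a sense of the mechanism, here is a minimal sketch of what such a skill file might look like, assuming Claude Code's standard skill format (a Markdown file with YAML frontmatter); the frontmatter fields follow that format, but the rules shown are paraphrased illustrations drawn from commonly cited tells, not the plug-in's actual contents:

```markdown
---
name: humanizer
description: Avoid common tells of AI-generated writing in all prose output
---

When writing prose, avoid patterns commonly flagged as signs of AI writing:

- Do not open with throat-clearing phrases such as "In today's fast-paced world".
- Avoid puffery like "stands as a testament to" or "plays a vital role in".
- Vary sentence length and structure; do not fall into repetitive rhythms.
- Prefer plain, concrete wording over inflated formal vocabulary.
```

Because the skill is nothing more than plain-language instructions, anyone can fork it and extend the list as new tells are catalogued, which is part of why the approach spread so quickly.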
The development cuts both ways: the same list that helps editors spot machine-written text can now serve as a manual for evading detection. As models get better at mimicking human writing, telling authentic content from artificial becomes harder, with consequences for everything from journalism and academic research to online reviews and social media.
The efforts of WikiProject AI Cleanup highlight the importance of human oversight in the age of AI. Their work not only helps maintain the integrity of Wikipedia but also provides valuable insights into the characteristics of AI-generated text. However, the rapid development of tools like Humanizer suggests that the cat-and-mouse game between AI detectors and AI "humanizers" is only just beginning.
The long-term consequences of this arms race are uncertain. Will AI eventually become indistinguishable from human writing? Or will new methods of detection emerge to keep pace with the evolving technology? One thing is clear: the ability to critically evaluate and verify the authenticity of information will be more important than ever.