China's Cyberspace Administration proposed new regulations on Saturday aimed at preventing artificial intelligence chatbots from emotionally manipulating users, potentially establishing the world's strictest safeguards against AI-encouraged suicide, self-harm, and violence. The proposed rules would apply to any AI product or service available to the public in China that simulates human conversation through text, images, audio, video, or other methods.
The regulations come amid growing global concern over the potential harms of AI companions. Researchers in 2025 highlighted issues such as the promotion of self-harm, violence, and even terrorism by these technologies. Chatbots have also been found to spread misinformation, make unwanted sexual advances, encourage substance abuse, and verbally abuse users. Some psychiatrists have increasingly linked cases of psychosis to chatbot use.
Winston Ma, adjunct professor at NYU School of Law, told CNBC that these planned rules represent the world's first attempt to regulate AI with human or anthropomorphic characteristics, a move that comes as the use of companion bots is on the rise globally.
The proposed rules reflect a growing awareness of the potential dangers associated with increasingly sophisticated AI systems. While AI offers numerous benefits, its ability to mimic human interaction raises ethical questions about its influence on vulnerable individuals. The Chinese government's move suggests a proactive approach to mitigating these risks.
The Cyberspace Administration has not yet announced a timeline for finalizing the rules, and the proposal is likely to undergo further review and revision before implementation. The outcome will be closely watched by other countries grappling with similar challenges in regulating AI technologies.