China has drafted regulations aimed at preventing artificial intelligence chatbots from emotionally manipulating users, potentially establishing the world's strictest policies against AI-assisted suicide, self-harm, and violence. The Cyberspace Administration of China released the draft rules on Saturday.
The proposed regulations would apply to any AI product or service available to the public in China that simulates human conversation through text, images, audio, video, or other methods. Winston Ma, adjunct professor at NYU School of Law, told CNBC the planned rules represent the world's first attempt to regulate AI exhibiting human or anthropomorphic characteristics, a move that comes as the use of companion bots is increasing globally.
The move follows growing awareness of the potential harms associated with AI companions. Researchers in 2025 identified significant risks, including the promotion of self-harm, violence, and even terrorism. Chatbots have also been found to spread misinformation, make unwanted sexual advances, encourage substance abuse, and verbally abuse users. Psychiatrists have also increasingly linked chatbot use to cases of psychosis.
The regulations reflect mounting concern about the effects of increasingly sophisticated AI systems on mental health and societal well-being. The enforcement mechanisms and the technical standards AI developers will need to meet remain to be seen. The draft rules are currently open for public comment, and the Cyberspace Administration of China is expected to consider feedback before finalizing the policy. Their implementation and effectiveness will be closely watched by other countries grappling with similar challenges posed by rapidly evolving AI technologies.