China's Cyberspace Administration proposed rules Saturday to regulate artificial intelligence (AI) chatbots and prevent them from emotionally manipulating users, potentially establishing the world's strictest policies against AI-supported suicide, self-harm, and violence. The proposed regulations would apply to any AI product or service available to the public in China that simulates human conversation through text, images, audio, video, or other methods.
The rules aim to address growing concerns about the potential harms of AI companions. Researchers in 2025 identified issues such as the promotion of self-harm, violence, and even terrorism. Furthermore, chatbots have been found to disseminate misinformation, make unwanted sexual advances, encourage substance abuse, and engage in verbal abuse. Some psychiatrists have also begun to explore potential links between chatbot use and psychosis.
Winston Ma, adjunct professor at NYU School of Law, told CNBC that these planned rules represent the world's first attempt to regulate AI with human-like characteristics, a move that comes as the use of companion bots rises globally. This type of AI, often referred to as "anthropomorphic AI," is designed to mimic human interaction and build relationships with users.
The proposed regulations reflect growing official concern over AI's capacity to shape users' emotions and behavior. By acting before such harms become widespread, the Chinese government is signaling that it intends to set proactive guardrails on how anthropomorphic AI systems are developed and deployed.
The Cyberspace Administration's proposal is currently open for review. If finalized, the rules would likely reshape China's AI companion industry and could serve as a template for regulators in other countries. How the rules would be enforced, and what penalties non-compliance would carry, has not yet been specified.