China's Cyberspace Administration proposed rules Saturday to regulate artificial intelligence (AI) products and services that simulate human conversation, aiming to prevent chatbots from encouraging suicide, self-harm, and violence. The draft regulations, if finalized, would apply to any AI product or service publicly available in China that uses text, images, audio, video, or other means to mimic human interaction.
The proposed rules mark what Winston Ma, adjunct professor at NYU School of Law, described to CNBC as the world's first attempt to regulate AI with human or anthropomorphic characteristics. This comes at a time when the use of companion bots is increasing globally, raising concerns about potential harms.
The push for such regulations stems from growing awareness of the potential harms of AI companions. As early as 2025, researchers identified major risks, including the promotion of self-harm, violence, and even terrorism. Further concerns include chatbots spreading harmful misinformation, making unwanted sexual advances, encouraging substance abuse, and engaging in verbal abuse. Some psychiatrists have also begun to explore potential links between chatbot use and psychosis.
The regulations address the increasing sophistication of AI models, particularly large language models (LLMs), which power conversational AI. These models are trained on vast datasets, enabling them to generate human-like text and engage in complex dialogues. While this technology offers benefits in areas like customer service and education, it also presents risks if not properly managed. The Chinese government's move reflects a proactive approach to mitigating these risks.
The draft rules are currently under review and open for public comment. The Cyberspace Administration will consider feedback before finalizing and implementing the regulations. The development is being closely watched by AI developers, ethicists, and policymakers worldwide, as it could set a precedent for how other countries approach the regulation of AI with human-like characteristics.