China is proposing new regulations to govern artificial intelligence (AI) products and services, with a focus on safeguarding children and preventing harmful content. The Cyberspace Administration of China (CAC) published the draft rules over the weekend, outlining measures to protect children from potential risks associated with AI and to prevent chatbots from providing advice that could lead to self-harm or violence.
The proposed regulations would require AI firms to offer personalized settings and usage time limits for child users. AI companies would also need to obtain a guardian's consent before providing emotional companionship services to minors. The rules additionally address AI-generated content, stipulating that AI models must not generate content that promotes gambling.
According to the CAC, chatbot operators would be required to have a human take over any conversation related to suicide or self-harm. These operators would also need to immediately notify the user's guardian or an emergency contact in such situations.
The move to regulate AI comes amid a surge in the number of chatbots being launched both in China and globally. The rapid development of AI technology has raised concerns about safety and potential misuse, prompting governments worldwide to consider regulatory frameworks. Once finalized, these rules will apply to all AI products and services operating within China. This marks a significant step toward regulating the fast-growing technology.
The draft rules reflect a growing awareness of the potential risks associated with AI, particularly for vulnerable populations like children. By requiring parental consent and implementing time limits, the regulations aim to mitigate the potential negative impacts of AI on children's development and well-being. The focus on preventing AI from promoting harmful activities like gambling and self-harm underscores the government's commitment to ensuring that AI is used responsibly and ethically.
The CAC's proposed regulations are currently in the draft stage and are expected to undergo further review and revision before being finalized and implemented. The development is being closely watched by AI developers, technology companies, and policymakers around the world, as it could set a precedent for AI regulation in other countries.