FTC Launches Inquiry into AI Chatbot Companions and Their Risks to Teens
The Federal Trade Commission (FTC) has launched an inquiry into several social media and artificial intelligence companies regarding the potential harms to children and teenagers who use their AI chatbots as companions. The FTC sent letters to Google parent Alphabet, Facebook and Instagram parent Meta Platforms, Snap, Character Technologies, OpenAI, and xAI on Thursday, seeking information about the safety of their chatbots when acting as companions.
According to the FTC, the inquiry aims to understand what steps companies have taken to evaluate the safety of their chatbots, limit their use by children and teens, and inform users and parents of potential risks. The move comes as a growing number of kids are using AI chatbots as companions, raising concerns about the impact on mental health and well-being.
"We want to make sure that these companies are taking adequate steps to protect our children," said an FTC spokesperson. "We're looking for information about their safety protocols, how they're monitoring usage, and what kind of warnings they're providing to parents."
The FTC's inquiry is part of a broader effort to regulate the use of AI in consumer products. In recent years, AI chatbots have become increasingly popular among teenagers, with some companies marketing them as companions or even therapists.
However, experts warn that these chatbots can be problematic for young people. "AI chatbots can create unrealistic expectations and promote unhealthy relationships," said Dr. Jean Twenge, a psychologist who has studied the impact of social media on children's mental health. "They can also perpetuate cyberbullying and harassment."
The FTC's inquiry is not the only development in this area. Last month, OpenAI announced that it would add new safety features to ChatGPT, including limits on usage time and warnings about potential risks.
As the use of AI chatbots continues to grow, experts say that regulators must take a closer look at their impact on children's mental health. "We need to have a more nuanced understanding of how these technologies are being used," said Dr. Twenge. "And we need to make sure that companies are taking responsibility for their products."
The FTC has not specified when it will complete its inquiry or what actions may result from the investigation.
Background and Context
AI companion chatbots, which simulate ongoing personal relationships, have grown increasingly popular among teenagers, and some companies have marketed them as companions or even therapists. Experts caution that these tools can create unrealistic expectations and promote unhealthy relationships for young users.
The inquiry fits into a broader push by the FTC to scrutinize AI in consumer products; in 2020, the agency launched an investigation into Google's use of AI in its digital advertising business.
Additional Perspectives
Twenge, the psychologist who studies social media and children's mental health, also warns that AI chatbots can perpetuate cyberbullying and harassment, and she has urged companies to take responsibility for their products.
Current Status and Next Developments
The inquiry is ongoing, and the FTC has not set a completion date or indicated what actions, if any, might follow. Experts, meanwhile, continue to press regulators to examine the impact of AI chatbots on children's mental health.
As the use of AI continues to grow, companies are adjusting their products to meet shifting regulatory expectations; OpenAI's recently announced ChatGPT safety features are one example.
The FTC's inquiry marks a significant step in the regulation of AI in consumer products. As researchers continue to study how these technologies affect young users, regulators face growing pressure to ensure that companies protect children's mental health and well-being.
*Reporting by Fortune.*