Discord will soon require users globally to verify their age with a face scan or ID to access adult content, while the EU has told Meta to allow rival AI chatbots on WhatsApp, according to recent reports. These developments come as Spotify saw a boost in subscribers despite artist criticism, and a Disney advert featuring a severed body was banned.
Discord announced it would roll out age verification measures worldwide from early March, requiring users to submit a face scan or upload a form of ID to access adult content, according to BBC Technology. The online chat service, which boasts over 200 million monthly users, aims to place everyone into a "teen-appropriate experience by default." The company already implements age checks in the UK and Australia to comply with online safety laws.
Meanwhile, the EU has accused Meta of breaching its rules by blocking rival AI firms' chatbots from WhatsApp. The European Commission stated that WhatsApp is an "important entry point" for AI chatbots, such as ChatGPT, to reach people, and claimed Meta was abusing its dominant position. A Meta spokesperson responded, stating the EU had "no reason" to intervene and had "incorrectly" assumed WhatsApp Business was a key way that people use chatbots, according to BBC Technology.
In other news, Spotify reported a jump of 9 million paid subscribers in the last three months of 2025, bringing the total to 290 million. This helped net profit rise to €1.17 billion, despite ongoing criticism from artists over the platform's payment structure. The service has more than 750 million users overall.
Additionally, a "menacing" Disney advert for the film Predator: Badlands was banned by the Advertising Standards Authority (ASA). The advert, which featured a severed body, was deemed "inappropriate and disturbing for young children," according to BBC Business. Disney argued the body was that of a robot, but the ASA upheld the complaint and banned the ad.
In related technology news, a University of Oxford study found that AI chatbots give inaccurate and inconsistent medical advice, posing potential risks to users. Researchers presented 1,300 people with medical scenarios, such as experiencing a particular symptom, and found that the chatbots' advice mixed good and bad responses, making it difficult to trust. Dr. Rebecca Payne, lead medical practitioner on the study, said it could be "dangerous" for people to ask chatbots about their symptoms, according to BBC Technology.