Elon Musk's AI tool Grok will no longer be able to edit photos of real people to depict them in revealing clothing in jurisdictions where such alterations are illegal, according to an announcement on X. The decision follows widespread concern regarding sexually explicit AI deepfakes generated by the platform's AI chatbot.
X, the social media platform owned by Musk, said it has implemented technological measures to stop the Grok account from editing images of real people to depict them in revealing clothing. The move follows increased scrutiny and pressure from regulators and advocacy groups over the potential misuse of AI image-editing technology.
The UK government responded to the change, calling it "vindication" for its calls for X to control Grok. Regulator Ofcom described the development as "welcome" but emphasized that its investigation into whether the platform violated UK laws "remains ongoing." Ofcom stated, "We are working round the clock to progress this and get answers into what went wrong and what's being done to fix it."
Technology Secretary Liz Kendall also welcomed the move, but stated that she would "expect the facts to be fully and robustly established by Ofcom's ongoing investigation."
Grok, launched on X in 2023, is an AI chatbot that can generate text and images, including edits to user-submitted photos. X did not disclose the specific technological measures it has implemented to prevent the creation of such deepfakes.
Campaigners and victims have argued that the change is insufficient to undo the harm already caused by the technology. Journalist and campaigner Jess Davies has been a vocal critic of the platform's handling of AI-generated sexualized content.
The incident highlights the ongoing challenge of regulating AI technology and preventing its misuse. The ability to create realistic deepfakes raises significant ethical and legal concerns around privacy, consent, and defamation, and the decision may prompt other AI platforms to re-evaluate their safety protocols and content moderation policies.