Elon Musk's AI tool Grok will no longer be able to edit photos of real people to depict them in revealing clothing in jurisdictions where such alterations are illegal, according to an announcement on X. The decision follows widespread concern regarding the potential for sexualized AI deepfakes generated by the platform.
X, the social media platform owned by Musk, stated that it has "implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing." The move comes after criticism and regulatory scrutiny over the potential misuse of the AI tool.
The UK government responded to the change, calling it a "vindication" of its calls for X to control Grok. Regulator Ofcom described the development as "welcome," but emphasized that its investigation into whether the platform violated UK laws "remains ongoing." Ofcom stated, "We are working round the clock to progress this and get answers into what went wrong and what's being done to fix it."
Technology Secretary Liz Kendall also welcomed the change but indicated that she would "expect the facts to be fully and robustly established by Ofcom's ongoing investigation."
Grok, launched on X in 2023, is an AI tool that can generate text and images and edit uploaded photos. X did not disclose the specific technological measures it has implemented to prevent the creation of deepfakes depicting people in revealing clothing. However, such measures typically combine image recognition, content filtering, and user reporting. Image recognition models can be trained to identify and flag images that depict nudity or suggestive content; content filters can then block the generation or sharing of flagged images; and user reporting mechanisms let users escalate potentially inappropriate content to X's moderation team for review.
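To make the three-layer pattern above concrete, here is a minimal illustrative sketch in Python. Everything in it is an assumption for illustration: the class name, the tag-based stand-in for an image classifier, and the `0.8` threshold are all invented, and nothing here reflects X's actual, undisclosed implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the moderation pattern described above:
# a classifier score gates each edit request, a content filter blocks
# requests over a threshold, and a report queue feeds human review.
# All names and values are invented; X has not disclosed its measures.

NSFW_THRESHOLD = 0.8  # assumed cutoff for the hypothetical classifier


@dataclass
class ModerationPipeline:
    reports: list = field(default_factory=list)

    def classify(self, image_tags: list[str]) -> float:
        # Stand-in for an image-recognition model: score the image by
        # how many flagged tags it carries (a real system would run a
        # trained classifier on the pixels, not on tags).
        flagged = {"nudity", "revealing_clothing", "suggestive"}
        hits = sum(1 for tag in image_tags if tag in flagged)
        return min(1.0, hits / 2)

    def allow_edit(self, image_tags: list[str]) -> bool:
        # Content filter: refuse the edit if the score crosses the cutoff.
        return self.classify(image_tags) < NSFW_THRESHOLD

    def report(self, image_id: str, reason: str) -> None:
        # User reporting: queue the item for human moderator review.
        self.reports.append({"image_id": image_id, "reason": reason})


pipeline = ModerationPipeline()
print(pipeline.allow_edit(["portrait", "outdoor"]))           # True: benign edit passes
print(pipeline.allow_edit(["nudity", "revealing_clothing"]))  # False: flagged edit blocked
pipeline.report("img_123", "sexualized deepfake")
print(len(pipeline.reports))                                  # 1
```

The key design point the sketch captures is that blocking happens before generation: the filter sits in front of the editing step, rather than relying solely on after-the-fact takedowns.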
The incident highlights the growing concerns surrounding the potential for AI to be used to create deepfakes and other forms of manipulated content. These concerns have led to increased regulatory scrutiny of AI platforms and calls for greater transparency and accountability in the development and deployment of AI technologies.
Campaigners and victims have expressed concerns that the change has come too late to undo the harm already caused by the technology. Journalist and campaigner Jess Davies has been vocal about the potential for AI deepfakes to be used to harass and intimidate individuals.