Elon Musk's AI tool Grok will no longer be able to edit photos of real people to depict them in revealing clothing in jurisdictions where such alterations are illegal, according to an announcement on X. The decision follows widespread concern over sexualized AI deepfakes generated by the platform.
X, the social media platform owned by Musk, said it had put technological measures in place to stop the Grok account from editing images of real people to show them in revealing clothing. The move follows criticism and regulatory scrutiny over misuse of the AI tool.
The UK government responded to the change, calling it "vindication" for its earlier calls for X to control Grok. Regulator Ofcom described the development as "welcome" but emphasized that its investigation into whether the platform had violated UK laws "remains ongoing." Ofcom stated, "We are working round the clock to progress this and get answers into what went wrong and what's being done to fix it."
Technology Secretary Liz Kendall also welcomed the move but indicated that she would "expect the facts to be fully and robustly established by Ofcom's ongoing investigation."
Grok, launched on X in 2023, is an AI tool that can generate text and images and edit existing photos. The announcement did not detail the specific technological measures X has implemented, but such safeguards typically involve automated systems that detect and block image manipulations that violate platform policies or local law.
Campaigners and victims have argued that the change is overdue and does not undo the harm already caused by the technology. Journalist and campaigner Jess Davies has reportedly been among those critical of the platform's initial lack of safeguards.
The incident highlights growing concern about AI-generated content and its potential misuse, particularly in the creation of deepfakes. The industry is grappling with how to develop AI tools while preventing their use for malicious purposes, and the regulatory landscape is evolving as governments and regulators worldwide weigh measures to address the risks of AI-generated content.
For now, X's measures are in place and Ofcom's investigation continues. The next developments will likely be the outcome of that investigation and further scrutiny of X's AI safety measures.