Elon Musk's AI tool Grok will no longer be able to edit photos of real people to depict them in revealing clothing in jurisdictions where such alterations are illegal. The announcement, made on Musk's social media platform X, follows widespread concern about the potential for sexualized AI deepfakes.
X said it had "implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing." The move follows criticism and regulatory scrutiny over the tool's potential for misuse.
The UK government responded to the change, calling it a "vindication" of its call for X to control Grok. Ofcom, the UK's communications regulator, described the development as "welcome" but emphasized that its investigation into whether X violated UK laws "remains ongoing." Ofcom stated, "We are working round the clock to progress this and get answers into what went wrong and what's being done to fix it."
Technology Secretary Liz Kendall also acknowledged the change but stated that she would "expect the facts to be fully and robustly established by Ofcom's ongoing investigation."
Grok, launched in 2023, is an AI tool integrated into the X platform. Its capabilities include image editing, a feature that has raised concerns about misuse, particularly the creation of non-consensual deepfakes. The new technological measures aim to prevent the AI from depicting real individuals in sexually explicit or revealing contexts without their consent.
Campaigners and victims have argued that the change is overdue and does not undo the harm already caused. Jess Davies, a journalist and campaigner, has been vocal about the need for stronger safeguards against AI-generated abuse.
X's announcement did not detail the specific technical measures it has implemented. However, such safeguards typically combine content filtering, image-recognition algorithms, and restrictions on the prompts the AI will process; these filters are designed to detect and block requests that could lead to inappropriate or harmful output.
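To make that layered approach concrete, here is a minimal, purely illustrative sketch in Python. X has not disclosed Grok's actual safeguards, so every name, pattern, and function here is hypothetical: it simply shows how a prompt filter might be combined with a signal from an upstream image-recognition step.

```python
import re

# Purely illustrative: none of these names reflect X's actual implementation.
# A small pattern list stands in for a production-grade prompt classifier.
BLOCKED_PROMPT_PATTERNS = [
    r"\b(undress|strip|topless)\b",
    r"\b(bikini|lingerie|underwear|revealing)\b",
]


def violates_prompt_policy(prompt: str) -> bool:
    """Return True if the edit request matches any blocked pattern."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in BLOCKED_PROMPT_PATTERNS)


def allow_edit(prompt: str, image_contains_real_person: bool) -> bool:
    """Decide whether an image-edit request may proceed.

    `image_contains_real_person` stands in for the output of an
    image-recognition step (e.g. a person or face detector run on the
    uploaded photo), which this sketch assumes happens upstream.
    """
    # Block only the combination the policy targets: a real person in the
    # source image plus a prompt asking for revealing or sexualized edits.
    if image_contains_real_person and violates_prompt_policy(prompt):
        return False
    return True


if __name__ == "__main__":
    print(allow_edit("put her in a bikini", image_contains_real_person=True))     # False (blocked)
    print(allow_edit("add a sunset background", image_contains_real_person=True)) # True (allowed)
```

In practice, production systems tend to rely on trained classifiers over both the prompt and the generated output rather than keyword lists, since simple pattern matching is easy to evade through paraphrasing.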
The episode highlights the growing ethical and regulatory challenges surrounding AI-powered tools, particularly in image manipulation and deepfake technology. As AI advances, developers and platforms face increasing pressure to build safeguards that prevent misuse and protect individuals from harm. Ofcom's ongoing investigation underscores the prospect of regulatory intervention and the need for platforms to comply with local laws on AI-generated content.