Elon Musk's AI tool Grok will no longer be able to edit photos of real people to depict them in revealing clothing in jurisdictions where such alterations are illegal. The announcement, made on X, the social media platform owned by Musk where Grok was launched in 2023, follows widespread concern regarding the potential for sexualized AI deepfakes.
X stated that "technological measures" have been implemented to prevent the Grok account from editing images of real people to depict them in revealing clothing. The move follows criticism and scrutiny from regulators and the public over the potential misuse of AI technology to create non-consensual, sexually explicit content.
The UK government responded, calling the change a "vindication" of its call for X to control Grok. Ofcom, the UK's communications regulator, described the development as "welcome," but emphasized that its investigation into whether the platform has violated UK laws "remains ongoing." Ofcom stated that they are "working round the clock to progress this and get answers into what went wrong and what's being done to fix it." Technology Secretary Liz Kendall also welcomed the move, adding that she would "expect the facts to be fully and robustly established by Ofcom's ongoing investigation."
The ability of AI tools like Grok to manipulate images raises significant ethical and legal questions. Deepfakes, which are synthetic media in which a person in an existing image or video is replaced with someone else's likeness, have become increasingly sophisticated, making them difficult to detect. This has led to concerns about their potential use for malicious purposes, including the creation of non-consensual pornography, disinformation campaigns, and identity theft.
The technical measures implemented by X to prevent Grok from creating such images were not detailed in the announcement. However, potential solutions could include filtering algorithms designed to detect and block prompts that request the creation of sexually explicit content, as well as watermarking AI-generated images to indicate their synthetic origin.
Campaigners and victims have argued that the change comes too late to undo the harm already caused. The incident highlights the challenges of regulating rapidly evolving AI technologies and the need for proactive measures to prevent their misuse. Ofcom's ongoing investigation is likely to examine the extent to which X was aware of the potential for Grok to be used for malicious purposes and what steps, if any, the company took to prevent such misuse. Its outcome could have significant implications for how AI technologies on social media platforms are regulated.