Elon Musk's AI tool Grok will no longer be able to edit photos of real people to depict them in revealing clothing in jurisdictions where such alterations are illegal, according to an announcement on X, the social media platform Musk owns. The decision follows widespread concern over sexualized AI deepfakes generated by the tool.
X said it has implemented technological measures to prevent the Grok account from being used to edit images of real people to show them in revealing clothing. The move follows criticism and scrutiny over the potential misuse of AI tools to create non-consensual intimate imagery.
The UK government responded to the announcement, calling it a "vindication" of its call for X to control Grok. Ofcom, the UK's communications regulator, described the change as a "welcome development" but noted that its investigation into whether the platform violated UK law "remains ongoing." The regulator said it is "working round the clock to progress this and get answers into what went wrong and what's being done to fix it."
Technology Secretary Liz Kendall welcomed the move but emphasized the importance of Ofcom's ongoing investigation to fully establish the facts.
Campaigners and victims say the change is overdue, arguing that it does not undo the harm already inflicted.
Grok, launched on X in 2023, is an AI model that generates text and images. The announcement did not detail the specific technological measures, but such safeguards typically involve screening both the user's prompt and the image content, and blocking requests that attempt to alter images of real people in sexually explicit ways.
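X has not disclosed how Grok's safeguards actually work, so the following is only a minimal illustrative sketch of the simplest layer such a filter could have: rule-based prompt screening. All pattern lists and function names here are hypothetical.

```python
import re

# Illustrative sketch only: X has not disclosed Grok's actual safeguards.
# These hypothetical patterns flag edit-style requests that attempt to
# depict a real person undressed or in revealing clothing.
BLOCKED_PATTERNS = [
    r"\bundress\b",
    r"\bremove\b.*\bclothes\b",
    r"\b(edit|put|make|show|dress)\b.*\b(bikini|lingerie|underwear)\b",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if an image-editing prompt should be blocked."""
    text = prompt.lower()
    return any(re.search(pattern, text) for pattern in BLOCKED_PATTERNS)

if __name__ == "__main__":
    print(screen_prompt("Edit this photo to put her in a bikini"))  # True
    print(screen_prompt("Brighten the background of this photo"))   # False
```

Keyword rules like these are easy to evade, which is why moderation pipelines in production generally pair them with trained classifiers that analyze the prompt, the input image, and the generated output.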
The incident highlights growing concern over the ethical implications of AI, particularly its potential misuse for creating deepfakes and spreading misinformation. The industry is grappling with how to balance innovation against the need to protect individuals from harm, and social media platforms face increasing regulatory pressure to monitor and control the content their AI tools generate.
For now, the technological measures are in place while Ofcom's investigation continues. The next developments are likely to be the findings of that investigation and any further regulatory action.