The move comes after criticism and regulatory scrutiny, including an ongoing investigation by the UK's communications regulator, Ofcom, into whether X has violated UK laws. Ofcom stated the change was a "welcome development" but emphasized that its investigation "remains ongoing." The regulator added, "We are working round the clock to progress this and get answers into what went wrong and what's being done to fix it."
Technology Secretary Liz Kendall also welcomed the platform's actions, stating she would "expect the facts to be fully and robustly established by Ofcom's ongoing investigation."
Grok, launched in 2023, is an AI tool integrated into the X platform. The specific technological measures implemented to prevent the image manipulation were not detailed in the announcement. However, the company indicated the measures are designed to restrict the Grok account's ability to perform the specific image editing function that generated concern.
The use of AI to create deepfakes, particularly those of a sexual nature, has raised significant ethical and legal concerns. Critics argue that such technology can be used to harass, defame, and exploit individuals, causing serious harm to victims. Journalist and campaigner Jess Davies, along with other campaigners and victims, has said the change comes too late to undo the harm already done.
The UK government previously called on X to control Grok, and officials have described the platform's recent action as a "vindication" of their efforts. The incident highlights the challenges social media platforms face in managing AI tools and preventing their misuse. The decision may also prompt other platforms to re-evaluate and tighten their own AI safety protocols.