The move comes after significant backlash and scrutiny, including an ongoing investigation by Ofcom, the UK's communications regulator, into whether X has violated UK laws. Ofcom stated that the change was a "welcome development" but emphasized that its investigation "remains ongoing." The regulator added, "We are working round the clock to progress this and get answers into what went wrong and what's being done to fix it."
Technology Secretary Liz Kendall also acknowledged the change, stating she would "expect the facts to be fully and robustly established by Ofcom's ongoing investigation."
Grok, launched on X in 2023, is a generative AI system that produces text and images. The announcement did not detail the specific technological measures implemented to prevent the creation of deepfakes. Such measures typically combine image-recognition algorithms that identify human subjects and block their manipulation with content filters that refuse to generate sexually suggestive material.
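To make the idea of a content filter concrete, the snippet below is a minimal, hypothetical sketch of a prompt-level check of the kind described above. It is not X's implementation: the keyword list, the name heuristic, and the is_blocked function are all assumptions introduced purely for illustration; production systems would rely on trained classifiers, named-entity recognition, and checks on the generated image itself.

```python
import re

# Illustrative only: a simplified prompt-level safeguard.
# The terms and heuristics here are hypothetical, not X's actual rules.
BLOCKED_TERMS = {"nude", "undress", "topless", "lingerie", "sexually explicit"}


def names_real_person(prompt: str) -> bool:
    """Rough heuristic: flag prompts containing a capitalised
    first-and-last-name pattern. Real systems would use named-entity
    recognition and face matching on the output image instead."""
    return re.search(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", prompt) is not None


def is_blocked(prompt: str) -> bool:
    """Reject prompts that combine an identifiable person with
    sexualising language."""
    lowered = prompt.lower()
    sexual = any(term in lowered for term in BLOCKED_TERMS)
    return sexual and names_real_person(prompt)


if __name__ == "__main__":
    for prompt in ("Jane Doe in lingerie", "a mountain landscape at dusk"):
        verdict = "blocked" if is_blocked(prompt) else "allowed"
        print(f"{prompt!r}: {verdict}")
```

In practice, filters of this sort are layered: a prompt check like the one above, a classifier run on the generated image, and restrictions on uploading or editing photos of real people.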
Campaigners and victims have argued that the change is overdue and does not undo the harm already caused by the technology. Jess Davies, a journalist and campaigner, has been vocal about the issue.
The UK government described the move as "vindication" of its calls for X to rein in Grok's capabilities. The incident underscores growing concern about the misuse of AI tools to create non-consensual intimate imagery, and is likely to increase pressure on AI developers to build safeguards and ethical guidelines against similar abuse. For now, the technological measures are in place, but Ofcom's investigation continues to assess the full extent of the issue and X's compliance with UK law.