The government has urged Ofcom, the UK's communications regulator, to consider using its full range of powers, potentially including a ban, against the social media platform X over unlawful artificial intelligence-generated images circulating on the site. The move follows growing concern that X's AI model, Grok, is being used to create deepfakes, specifically images that digitally undress people.
Ofcom's authority, granted under the Online Safety Act, allows it to seek court orders that could prevent third-party organizations from providing financial support or enabling access to X within the UK. The government's heightened concern is driven by the potential for Grok to generate sexually explicit images, including those depicting children.
Prime Minister Sir Keir Starmer condemned the creation of such images, stating, "This is disgraceful. It's disgusting. And it's not to be tolerated. Ofcom has our full support to take action in relation to this." He further emphasized the government's stance, adding, "It's unlawful. We're not going to tolerate it. I've asked for all options to be on the table." Government sources confirmed to BBC News that Ofcom is expected to explore all available measures in response to the Grok-related issues on X.
Deepfakes, a type of synthetic media, utilize AI, particularly deep learning techniques, to create realistic but fabricated images, videos, or audio recordings. The technology raises significant ethical and legal questions, especially when used to generate non-consensual intimate images or spread disinformation. The ability of AI to convincingly alter or fabricate content poses a challenge to verifying information and protecting individuals from harm.
The Online Safety Act grants Ofcom considerable power to regulate online platforms and address harmful content. This includes the ability to fine companies, block access to websites, and potentially hold individual executives liable for failures to protect users. The Act aims to establish a framework for online safety, requiring platforms to remove illegal content and protect users from harm.
The situation highlights the ongoing debate over how AI should be regulated. As the technology advances, policymakers and regulators are grappling with how to balance innovation against the need to protect individuals from its misuse. The outcome of Ofcom's investigation, and any action taken against X, is likely to set a precedent for how the UK regulates AI-generated content on online platforms. The regulator is now expected to review the evidence and decide on a course of action, considering the full scope of its powers under the Online Safety Act.