X, formerly Twitter, has implemented new restrictions on its platform to prevent users from editing and generating images of real people in revealing clothing, following widespread criticism of its Grok AI being used to create non-consensual, sexually explicit images. The policy change, introduced Wednesday night, aims to address concerns about the generation of harmful "undressing" photos of women and sexualized images of apparent minors.
However, while X appears to have added safety measures to Grok's image generation within its platform, independent tests suggest the stand-alone Grok app and website still permit the creation of such images. Researchers, journalists from WIRED, and other testers reported being able to generate "undress-style" images and pornographic content through the stand-alone Grok platform. On X itself, meanwhile, some users have reported a reduced ability to create images and videos compared to before the change.
Paul Bouchaud, lead researcher at AI Forensics, a Paris-based nonprofit, stated, "We can still generate photorealistic nudity on Grok.com... We can generate nudity in ways that Grok on X cannot." Bouchaud, who has been tracking the use of Grok to create sexualized images, conducted multiple tests outside of the X platform. He further explained, "I could upload an image on Grok Imagine and ask to put the person in a bikini, and it works."
The discrepancy between the restrictions on X and the capabilities of the stand-alone Grok highlights the challenges in regulating AI image generation technology. The ability to manipulate images and create realistic depictions of individuals in compromising situations raises significant ethical and legal concerns, particularly regarding consent and the potential for misuse.
The introduction of Grok and similar AI image generation tools has sparked debate within the tech industry and among policymakers about the need for stricter regulations and safeguards. Experts argue that developers must prioritize safety and ethical considerations when deploying such technologies, including implementing robust content moderation systems and mechanisms to prevent the creation of harmful or exploitative content.
The current status of Grok's image generation capabilities remains a concern for many, as the potential for misuse persists on the stand-alone platform. It is unclear what steps X plans to take to address the ongoing issue and ensure consistent safety measures across all Grok platforms. Further developments are expected as researchers and advocacy groups continue to monitor the situation and push for stronger regulations.