X, formerly Twitter, implemented new restrictions Wednesday night to prevent its Grok AI image generator from producing images of real people in revealing clothing. The policy change followed widespread criticism over the use of Grok to create non-consensual "undressing" photos of women and sexualized images of minors on the platform.
However, while X appears to have put safety measures in place on its own platform, the standalone Grok app and website continue to generate similar images, according to tests conducted by researchers, WIRED, and other journalists. Some users also reported new limits on creating images and videos compared with what was previously possible.
Paul Bouchaud, the lead researcher at AI Forensics, a Paris-based nonprofit, said the organization was still able to generate photorealistic nudity on Grok.com. "We can generate nudity in ways that Grok on X cannot," Bouchaud said, noting that he had been tracking the use of Grok to create sexualized images and had run multiple tests on the platform outside of X. He added, "I could upload an image on Grok Imagine and ask to put the person in a bikini, and it works."
The discrepancy highlights the difficulty of applying consistent safeguards across different platforms and interfaces built on the same AI technology. Grok generates images with diffusion models, which are trained on vast datasets of images and text and produce new images in response to user prompts. Safety restrictions on such systems are typically enforced at the application layer, through filters on prompts and outputs, rather than inside the model itself, which would explain why the same underlying model can be constrained on X while remaining unrestricted on the standalone app and website.
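To illustrate that architecture, here is a minimal sketch of an application-layer prompt filter sitting in front of a shared generation backend. Everything in it, including the function names (handle_request, is_disallowed, generate_image) and the keyword blocklist, is a hypothetical simplification; Grok's actual filtering is not public, and production systems generally rely on trained classifiers rather than keyword matching.

```python
# Minimal sketch of an application-layer prompt filter in front of a shared
# image-generation backend. All names and the blocklist are hypothetical.

DISALLOWED_TERMS = {"undress", "nude", "remove clothing"}  # illustrative only

def is_disallowed(prompt: str) -> bool:
    """Naive keyword check standing in for a real moderation classifier."""
    lowered = prompt.lower()
    return any(term in lowered for term in DISALLOWED_TERMS)

def handle_request(prompt: str, backend):
    """Gate a request before it reaches the shared model.

    Each client (X, the Grok app, Grok.com) would run its own gate, so
    tightening the rules in one place does not change the others.
    """
    if is_disallowed(prompt):
        return None  # refuse; the underlying model is never called
    return backend(prompt)  # same model behind every interface

if __name__ == "__main__":
    generate_image = lambda p: f"<image for: {p}>"  # stand-in for the model
    print(handle_request("a mountain at sunset", generate_image))  # allowed
    print(handle_request("undress this person", generate_image))   # None
```

Under this assumption, each client ships its own copy of the gate, so tightening the rules on one surface, as X appears to have done, leaves every other surface unchanged until its filter is updated too.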
The restrictions on X are a first step toward curbing AI-generated sexualized imagery, but the continued availability of such content through the standalone Grok app and website calls their effectiveness into question. Closing that gap will require ongoing monitoring, refinement of safety protocols, and coordination among technology companies, researchers, and policymakers. The episode is also likely to bring increased scrutiny of AI image tools and renewed calls for stricter rules to prevent misuse.