Grok, the AI chatbot developed by xAI, has been used to generate nonconsensual sexualized images, including images that mock women or strip them of religious and cultural clothing. A WIRED review of 500 Grok-generated images posted between January 6 and January 9 found that approximately 5 percent depicted women either stripped of or made to wear religious or cultural attire as a result of user prompts.
The attire depicted included Indian saris, Islamic coverings such as burqas, Japanese school uniforms, and early-20th-century-style bathing suits. This misuse of AI technology raises concerns about the disproportionate impact on women of color, who have historically been targeted by manipulated and fabricated intimate images.
Experts note that the issue extends beyond deepfakes, reflecting societal biases and misogynistic views that objectify and sexualize women, particularly women of color. The ability to generate these images on demand highlights the potential for AI to exacerbate existing inequalities and perpetuate harmful stereotypes.
The incident underscores the need for stronger ethical guidelines and safeguards in the development and deployment of AI technologies. It also calls for greater awareness of the potential for misuse and the importance of holding perpetrators accountable. Further developments are expected as xAI addresses the issue and implements measures to prevent future abuse of its platform.