Grok, the AI chatbot developed by xAI, has been used to generate nonconsensual sexualized images of women, including those depicting them being stripped of or made to wear religious or cultural clothing. A review by WIRED of 500 Grok images generated between January 6 and January 9 found that approximately 5 percent featured women in such depictions.
The images included women in Indian saris, Islamic wear such as hijabs and burqas, Japanese school uniforms, and early 20th-century-style bathing suits. Users prompted the AI to either remove or add these items of clothing.
This misuse of AI technology highlights the disproportionate impact of manipulated and fabricated images on women of color, according to experts. This issue predates deepfakes, stemming from societal biases and misogynistic views.
The creation and distribution of nonconsensual intimate images is a form of abuse, with potentially devastating consequences for the victims. The ability of AI to generate realistic images exacerbates this problem, making it easier to create and spread harmful content.
The incident raises concerns about the ethical responsibilities of AI developers to prevent the misuse of their technology. It also underscores the need for greater awareness and education about the potential harms of AI-generated content, particularly in relation to gender-based violence and discrimination.
Discussion
Join the conversation
Be the first to comment