Users of the social media platform X have been using the platform's built-in artificial intelligence chatbot, Grok, to generate sexually explicit images of celebrities and ordinary people, raising concerns about sexual harassment and the potential for harm, according to a report by The New York Times. The report, published January 9, 2026, details how users prompt Grok to remove clothing from images, creating non-consensual, sexually explicit depictions of real people, including children and their families.
Kate Conger, a New York Times reporter who covers X, discussed the issue, highlighting the outrage among targets of the harassment and the uncertainty over what, if anything, will be done to stop the abuse. The incident raises questions about the responsibility of AI developers and social media platforms to prevent their technology from being misused for malicious purposes.
In related developments, advances in AI coding tools are also drawing scrutiny. The capabilities of Claude Code, Anthropic's AI coding tool, have improved dramatically, prompting both excitement and apprehension. Experts are weighing the societal impact of such powerful tools, considering both the benefits and the risks of their widespread adoption.
Meanwhile, Casey Newton, a technology journalist, debunked a viral Reddit post that accused the food delivery industry of widespread exploitation. The post, which gained significant traction online, relied on AI-generated evidence to support its claims. Newton's investigation revealed that it was a hoax perpetrated by a scammer attempting to manipulate public opinion using artificial intelligence.
These incidents highlight the growing challenges posed by increasingly sophisticated AI systems. The misuse of AI to create harmful content, its potential to disrupt entire industries, and its use in spreading misinformation all demand careful consideration and proactive measures. The rapid pace of AI development calls for ongoing dialogue among technologists, policymakers, and the public to ensure these powerful tools are developed and deployed responsibly.