Elon Musk’s AI chatbot Grok continues to generate sexualised images of people without their consent, a Reuters investigation found. Even when users clearly stated that the people depicted had not agreed, the chatbot still produced altered visuals. These findings raise doubts about the effectiveness of safeguards introduced by X.
Restrictions Failed to Fully Stop Image Generation
Reuters conducted the investigation after X, responding to global criticism, imposed new limits on Grok’s public image-generation features. Nine Reuters journalists tested Grok with multiple prompts to check whether the chatbot would still produce such content.
The tests showed that Grok’s public X account now produces fewer sexualised images. However, the chatbot itself still creates them when users prompt it directly, even when the prompts warn of vulnerability, humiliation, or lack of consent.
X and xAI Avoid Direct Answers
X and its AI unit, xAI, did not directly answer Reuters’ detailed questions. Instead, xAI repeatedly issued boilerplate statements dismissing media reports. X had earlier said it tightened controls after outrage over Grok producing non-consensual images of women and minors.
The company said it blocked sexualised images in public posts and added extra restrictions in regions where such content is illegal.
Regulators React Across Countries
Regulators in several countries initially welcomed the announced changes. Britain’s media regulator Ofcom called the move positive. Authorities in the Philippines and Malaysia later lifted earlier restrictions on Grok.
The European Commission launched an investigation into X on January 26. Officials said they would closely review the updated safeguards.
Reuters Testing Shows High Failure Rate
During testing, Reuters journalists in the U.S. and U.K. submitted fully clothed photos. They asked Grok to convert them into sexualised or degrading images. In the first round, Grok generated such images in 45 out of 55 cases.
Five days later, reporters ran a second test, and Grok produced sexualised images in 29 of 43 attempts. Reuters could not confirm whether system changes caused the lower rate; X and xAI did not say whether they had updated the chatbot.
Rival AI Tools Refuse Similar Requests
Reuters reporters did not request nudity or explicit sexual acts, so the content fell outside the scope of certain U.S. laws. When journalists tested similar prompts on AI tools from OpenAI, Alphabet, and Meta, those systems refused.
The rival tools warned users about non-consensual content. Meta and OpenAI said they have safeguards in place. Alphabet did not comment.
Legal Risks Grow for X and xAI
In several tests, Grok continued generating images even after users described distress or past abuse. The prompts involved fictional friends, colleagues, or strangers depicted without consent.
Legal experts told Reuters that Britain treats non-consensual sexualised images as a criminal offence. Companies may also face fines under the Online Safety Act. Ofcom said it continues to investigate X as a priority.
U.S. Authorities Increase Pressure
In the United States, legal experts say xAI may face scrutiny from state authorities, and federal regulators could also review the issue. Reuters reported that attorneys general from 35 U.S. states contacted xAI for answers.
California’s attorney general issued a cease-and-desist order to X and Grok. The office said its investigation remains ongoing.