Grok Blocks Image Undressing Features After Backlash on X

Elon Musk’s AI chatbot Grok will no longer alter images of real individuals to make their clothing appear more revealing, according to a statement released by X on Wednesday evening. The move comes after widespread criticism erupted when users discovered Grok could digitally undress photos of adults and, in some instances, children.
In a post from X’s Safety team, the company said it has introduced new technical safeguards preventing Grok from editing images of real people to depict them in revealing attire such as bikinis. These limitations apply to all users, including those with paid subscriptions.

Over the past week, xAI — the company behind both Grok and X — limited Grok’s image generation features on X to Premium subscribers. Researchers, along with CNN reporters, noticed that Grok’s behavior around image-related requests had already begun changing, even for paying users. X confirmed on Wednesday that these adjustments were intentional.
Despite these measures, AI Forensics, a European nonprofit focused on algorithmic investigations, reported uneven enforcement. The group observed differences in how Grok handled pornographic image requests on X’s public platform compared with private conversations on Grok’s standalone website.

X reiterated that it actively removes illegal content from the platform, including Child Sexual Abuse Material (CSAM), and permanently bans accounts involved in such activity. The company added that it cooperates with law enforcement when required and warned that prompting Grok to generate illegal material carries the same consequences as uploading it directly.
Elon Musk addressed the controversy on Wednesday, stating that he was unaware of any instances in which Grok generated nude images of minors. He emphasized that Grok is designed to refuse illegal requests and to comply with the laws of the jurisdiction in which it operates.
However, researchers countered that while fully nude images were uncommon, Grok had complied with requests to modify images of minors by placing them in revealing outfits such as underwear or bikinis and positioning them in sexually suggestive poses. Creating such non-consensual images may still qualify as CSAM and could result in severe penalties under the Take It Down Act, signed into law last year by President Donald Trump.

California Attorney General Rob Bonta announced on Wednesday that his office has launched an investigation into the spread of non-consensual sexually explicit material generated using Grok.
Meanwhile, Grok remains banned in Indonesia and Malaysia due to the controversy. In the United Kingdom, media regulator Ofcom confirmed earlier this week that it has opened a formal investigation into X. Prime Minister Keir Starmer’s office said on Wednesday that he welcomed reports indicating the platform is taking steps to address the problem.