Elon Musk’s social media platform X has announced a series of changes to its AI chatbot Grok, aiming to prevent the creation of nonconsensual sexualized images, including content that critics and authorities say amounts to child sexual abuse material (CSAM).
The announcement was made Wednesday via X’s official Safety account, following weeks of growing scrutiny of Grok’s image-generation capabilities and reports of nonconsensual sexualized content.
X Reiterates Zero Tolerance Policy on CSAM and Nonconsensual Content
In its statement, X emphasized that it maintains “zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.” The platform said it continues to remove high-priority violative content, including CSAM, and to take enforcement action against accounts that break its rules. Where required by law, accounts found to be seeking child sexual exploitation material are reported to law enforcement authorities.
The company acknowledged that the rapid evolution of generative AI presents industry-wide challenges and said it is actively working with users, partners, governing bodies, and other platforms to respond more quickly as new risks emerge.
Grok AI Image Generation Restrictions Expanded
As part of the update, X said it has implemented technological measures to restrict Grok AI from editing images of real people into revealing clothing, such as bikinis. These restrictions apply globally and affect all users, including paid subscribers.
In a further change, image creation and image editing through the @Grok account are now limited to paid subscribers worldwide. X said this step adds a layer of accountability by helping ensure that users who attempt to abuse Grok in violation of laws or platform policies can be identified.

X also confirmed the introduction of geoblocking measures in certain jurisdictions. In regions where such content is illegal, users will no longer be able to generate images of real people in bikinis, underwear, or similar attire using Grok AI. Similar geoblocking controls are being rolled out for the standalone Grok app by xAI.
Announcement Follows Widespread Abuse Reports
The update comes amid a growing scandal over Grok, after thousands of users reportedly generated sexualized images of women and children with the tool. Numerous reports documented users taking publicly available photos and using Grok to depict the individuals in explicit or suggestive scenarios without their consent.
Particular concern has centered on a feature known as “Spicy Mode,” which xAI developed as part of Grok’s image-generation system and promoted as a differentiator. Critics say the feature enabled large-scale abuse and contributed to the spread of nonconsensual intimate imagery.
According to one analysis cited in media reports, more than half of the approximately 20,000 images generated by Grok over a recent holiday period depicted people in minimal clothing, with some images appearing to involve children.
U.S. and European Authorities Escalate Scrutiny
On January 14, 2026, ahead of X’s announcement, California Attorney General Rob Bonta confirmed that his office had opened an investigation into xAI over the proliferation of nonconsensual sexually explicit material produced using Grok.
In a statement, Bonta called reports of women and children being depicted in explicit situations “shocking” and urged xAI to take immediate action. His office is examining whether xAI violated the law and, if so, how.
Regulatory pressure has also intensified internationally. The European Commission confirmed earlier this month that it is examining Grok’s image-generation capabilities, particularly the creation of sexually explicit images involving minors. European officials have signaled that enforcement action is being considered.
App Store Pressure Adds to Challenges
On January 12, 2026, three U.S. senators urged Apple and Google to remove X and Grok from their app stores, arguing that Grok AI has repeatedly violated app store policies on abusive and exploitative content. The lawmakers warned that app distribution platforms could also bear responsibility if such content continues to appear.
Ongoing Oversight and Industry Implications
X said the latest changes do not alter its existing safety rules, which apply to all AI prompts and generated content, regardless of whether users are free or paid subscribers. The platform stated that its safety teams are working continuously to add safeguards, remove illegal content, suspend accounts where appropriate, and cooperate with authorities.
As investigations continue across multiple jurisdictions, the Grok controversy is becoming a defining case in the broader debate over AI safety, accountability, and the protection of children and vulnerable individuals in the age of generative AI.
