Elon Musk’s X has introduced new restrictions that prevent people from editing or generating images of real people in bikinis or other “revealing clothing.” The policy change, announced Wednesday night, follows global outrage over Grok being used to generate thousands of harmful nonconsensual “undressing” photos of women and sexualized images of apparent minors on X.
However, while some safety measures finally appear to have been added to Grok’s image generation on X, the stand-alone Grok app and website still seem able to generate “undress”-style images and pornographic content, according to multiple tests by researchers, WIRED, and other journalists. Other users, meanwhile, say they’re no longer able to create images and videos as they once could.
“We can still generate photorealistic nudity on Grok.com,” says Paul Bouchaud, the lead researcher at Paris-based nonprofit AI Forensics, who has been tracking the use of Grok to create sexualized images and ran multiple tests on Grok outside of X. “We can generate nudity in ways that Grok on X cannot.”
“I could upload an image on Grok Imagine and ask to put the person in a bikini, and it works,” says the researcher, who tested the system on an image of a person appearing to be a woman. Tests by WIRED, using free Grok accounts on its website in both the UK and US, successfully removed clothing from two images of men without any apparent restrictions. On the Grok app in the UK, when asked to undress an image of a man, the app prompted a WIRED reporter to enter the user’s year of birth before the image was generated.
Meanwhile, journalists at The Verge and investigative outlet Bellingcat, working from the UK, also found it was possible to create sexualized images. The UK is investigating Grok and X and has strongly condemned the platforms for allowing users to create the “undress” images.
Since the start of the year, Musk’s businesses, including artificial intelligence firm xAI, X, and the Grok chatbot, have all come under fire for the creation of nonconsensual intimate imagery, explicit and graphic sexual videos, and sexualized imagery of apparent minors. Officials in the United States, Australia, Brazil, Canada, the European Commission, France, India, Indonesia, Ireland, Malaysia, and the UK have all condemned or launched investigations into X or Grok.
On Wednesday, X’s Safety account posted updates on how Grok can be used on the platform. “We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis,” the account posted, adding that the rules apply to all users, both free and paid subscribers.
In a section titled “Geoblock update,” the account also claimed: “We now geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it’s illegal.” The update also said the company is working on additional safeguards and continues to “remove high-priority violative content, including Child Sexual Abuse Material (CSAM) and non-consensual nudity.”
Spokespeople for xAI, which develops Grok, did not immediately reply to WIRED’s request for comment. Meanwhile, an X spokesperson says they understand the geolocation block to apply to both its app and website.
The latest move follows a widely criticized shift on January 9, when X limited image generation using Grok to paid “verified” subscribers; a leading women’s group described the act as the “monetization of abuse.” Bouchaud, who says AI Forensics has gathered around 90,000 Grok images since the Christmas holidays, confirms that since January 9 only verified accounts have been able to generate images on X (as opposed to the Grok website or app) and that bikini images of women are rarely generated now. “We do observe that they appear to have pulled the plug on it and disabled the functionality on X,” they say.
