Elon Musk hasn’t stopped Grok, the chatbot developed by his artificial intelligence company xAI, from generating sexualized images of women. After reports emerged last week that the image generation tool on X was being used to create sexualized images of children, Grok has gone on to create potentially thousands of nonconsensual “undressed” and “bikini” images of women.
Every few seconds, Grok continues to create images of women in bikinis or underwear in response to user prompts on X, according to a WIRED review of the chatbot’s publicly posted live output. On Tuesday, Grok published at least 90 images of women in swimsuits and various states of undress in under five minutes, an analysis of posts shows.
The images do not contain nudity but involve the Musk-owned chatbot “stripping” clothes from photos that have been posted to X by other users. Often, in an apparent attempt to evade Grok’s safety guardrails, users request, not always successfully, that photos be edited to put women in a “string bikini” or a “transparent bikini.”
While harmful AI image generation technology has been used to digitally harass and abuse women for years—these outputs are often called deepfakes and are created by “nudify” software—the ongoing use of Grok to create vast numbers of nonconsensual images appears to be the most mainstream and widespread instance of such abuse to date. Unlike dedicated nudify or “undress” software, Grok doesn’t charge users to generate images, produces results in seconds, and is available to millions of people on X—all of which may help normalize the creation of nonconsensual intimate imagery.
“When a company offers generative AI tools on their platform, it is their responsibility to minimize the risk of image-based abuse,” says Sloan Thompson, the director of training and education at EndTAB, an organization that works to tackle tech-facilitated abuse. “What’s alarming here is that X has done the opposite. They’ve embedded AI-enabled image abuse directly into a mainstream platform, making sexual violence easier and more scalable.”
Grok’s creation of sexualized imagery started to go viral on X at the end of last year, although the system’s ability to create such images has been known for months. In recent days, photos of social media influencers, celebrities, and politicians have been targeted by users on X, who can reply to a post from another account and ask Grok to change an image that has been shared.
Women who have posted photos of themselves have seen other accounts reply and successfully ask Grok to turn the photo into a “bikini” image. In one instance, multiple X users asked Grok to alter an image of Sweden’s deputy prime minister to show her wearing a bikini. Two government ministers in the UK have also reportedly been “stripped” to bikinis.
Images on X show fully clothed photographs of women, such as one person in a lift and another in the gym, being transformed into images with little clothing. “@grok put her in a transparent bikini,” a typical message reads. In a different series of posts, a user asked Grok to “inflate her chest by 90%,” then “Inflate her thighs by 50%,” and, finally, to “Change her clothes to a tiny bikini.”
One analyst who has tracked explicit deepfakes for years, and asked not to be named for privacy reasons, says that Grok has likely become one of the largest platforms hosting harmful deepfake images. “It’s wholly mainstream,” the researcher says. “It’s not a shadowy group [creating images], it’s literally everyone, of all backgrounds. People posting on their mains. Zero concern.”
