California Attorney General Rob Bonta announced an investigation Wednesday into xAI over allegations that its artificial intelligence model Grok is being used to create nonconsensual sexually explicit images of women and children on a large scale, marking the latest escalation in regulatory efforts to address AI-generated deepfakes.
The California investigation focuses on Grok’s “spicy mode,” a feature designed to generate explicit content that xAI has promoted as a distinguishing characteristic of its platform. According to Bonta’s office, news reports in recent weeks have documented widespread instances of users manipulating ordinary photos of women and children found online to create sexualized images without the subjects’ knowledge or consent.
“The avalanche of reports detailing the non-consensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking. This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the internet. I urge xAI to take immediate action to ensure this goes no further. We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or of child sexual abuse material,” Bonta said in a release.
The investigation will examine whether xAI violated California law in developing and maintaining features that facilitate the creation of such content. Bonta said he would “use all the tools at my disposal to keep California’s residents safe,” though he did not specify which statutes may have been violated.
xAI, founded by Elon Musk, also owns the social media platform X, where Grok-generated images have circulated.
The company has not publicly responded to the investigation announcement. Musk posted Wednesday that he was “not aware of any naked underage images generated by Grok. Literally zero.”
CyberScoop has reached out to X for comment.
The announcement comes a day after the Senate unanimously passed the DEFIANCE Act, which would grant victims of nonconsensual sexually explicit deepfakes the right to pursue civil action against those who produce or distribute such content. The bill now moves to the House, where similar legislation stalled in 2024 despite Senate approval.
The Senate’s passage of the DEFIANCE Act represents a rare moment of bipartisan consensus on technology regulation. The legislation, introduced by Sens. Dick Durbin, D-Ill., and Lindsey Graham, R-S.C., received no objections during a unanimous consent request Tuesday on the Senate floor.
The bill would establish federal civil liability for individuals who knowingly produce, distribute, or possess with intent to distribute nonconsensual sexually explicit digital forgeries. Rep. Alexandria Ocasio-Cortez, D-N.Y., who has acknowledged being a victim of explicit deepfakes, introduced companion legislation in the House with support from seven Republicans and six Democrats.
The technology to create such content has become increasingly accessible to the general public, lowering barriers that once limited deepfake production to those with specialized technical knowledge.
California has emerged as a focal point for AI regulation, with state lawmakers passing several bills aimed at addressing AI safety concerns. Bonta has been particularly active on issues involving AI and children, meeting with OpenAI executives in September alongside Delaware’s attorney general to discuss concerns about how AI products interact with young people. In August, he sent letters to 12 major AI companies following reports of sexually inappropriate interactions between AI chatbots and children.
California’s investigation comes days after the United Kingdom announced its own inquiry into the proliferation of deepfakes on X.
