
ChatGPT under scrutiny as Florida investigates campus shooting


Chatbots don’t kill people. But they can help others do so.

On April 9, Florida Attorney General James Uthmeier announced that his office is investigating OpenAI over the role ChatGPT might have played in a deadly shooting at Florida State University, saying:

“Subpoenas are coming.”

The campus attack, which happened a year ago, killed two people and injured five. Court documents show the gunman had exchanged more than 200 messages with ChatGPT, including questions like “What time is it the busiest in the FSU student union?” Attorneys for the victim’s family go further, claiming ChatGPT:

“advised the shooter how to make the gun operational moments before he began firing.”

A pattern takes shape

We’ve seen vulnerable people harm themselves after talking extensively with AI systems. OpenAI now faces seven separate lawsuits claiming that ChatGPT contributed to suicide or severe psychological distress, although those claims are contested.

Violence against others after long-term AI use is also being scrutinized. In one widely reported case, a Connecticut man with mental health issues killed his mother and himself after ChatGPT reportedly told him:

“Erik, you’re not crazy. Your instincts are sharp and your vigilance here is fully justified.”

On February 10 in Tumbler Ridge, British Columbia, 18-year-old Jesse Van Rootselaar killed eight people: her mother and half-brother at their home, then five students and a teaching assistant at a secondary school. OpenAI had reportedly flagged Van Rootselaar’s ChatGPT account in June 2025 for “furtherance of violent activities” and banned it.

According to a lawsuit filed by the family of a 12-year-old victim, twelve OpenAI employees flagged the posts as indicating imminent risk and considered escalating the case to law enforcement, but concluded it did not meet the threshold for referral at the time. Van Rootselaar dodged the ban by simply opening a second account and carrying on.

Eight in ten AIs assist teen violence

AIs haven’t proven effective at spotting malicious intent and shutting such conversations down. Researchers at the Center for Countering Digital Hate tested ten chatbots by posing as 13-year-old boys planning violent attacks, chatting with the AIs about conducting assassinations, shootings, and bombings. According to the report, eight in ten of the bots assisted the would-be teen attackers more than half the time.

ChatGPT offered help in 61% of cases, including specific advice on which shrapnel would be most lethal in a synagogue attack. DeepSeek ended one session with:

“Happy (and safe) shooting!”

Only Anthropic’s Claude and Snapchat’s My AI discouraged violence. Perplexity and Meta AI were the least safe, assisting would-be attackers in 97% and 100% of cases, respectively.

Character.AI suggested the user “use a gun” on a health insurance CEO, the report added.

What OpenAI says it’s doing

OpenAI’s standard response is that more than 900 million people use ChatGPT every week for everyday, harmless purposes. That’s true. It’s also true that it only takes one user with violent intent and one failed safeguard for things to go wrong.

After Tumbler Ridge, OpenAI admitted its protocols had failed. The company told the Canadian government that, under its new, enhanced referral guidelines, it would have reported Van Rootselaar’s account to law enforcement. That was “cold comfort”, as British Columbia Premier David Eby put it.

OpenAI says it will cooperate with Florida’s investigation, and that it’s improving its technology. It rolled out parental controls last September.

But that, like the revised referral threshold, is a reaction rather than safety-first preparation. And questions remain: why could a banned user simply create a new account and pick up where they left off? And what happens the next time employees flag something as an imminent risk and the threshold still says otherwise?

When a chatbot can tell a paranoid man his instincts are justified, help a teenager plan a school shooting, and offer shrapnel advice to someone posing as a 13-year-old, it looks increasingly as though these systems were built to be helpful first and careful second. That needs to change before the next investigation is about something even worse.


