To talk dirty to ChatGPT, you may soon have to show it your driver’s license.
OpenAI announced last month that ChatGPT will soon offer erotica—but only for verified adults. That sounds like a clever guardrail until you realize what “verified” might mean: uploading government identification to a company that already knows your search history, your conversations, and maybe your fantasies.
It’s a surreal moment for technology. The most famous AI tool in the world is turning into a porn gatekeeper. And it’s not happening in a vacuum. California just passed a law requiring age checks for app downloads, and twenty-four US states have passed similar laws. Discord’s age-verification partner was hacked this summer, exposing 70,000 government-issued IDs that are now being used for extortion.
What began as an effort to keep kids off adult sites has quietly evolved into the largest digital ID system ever built. One we never voted for.
The normalization of online ID checkpoints
Age verification started as a moral crusade. Lawmakers wanted to protect minors from explicit material. But every system that requires an ID online transforms into something else entirely: a surveillance checkpoint. To prove you’re an adult, you hand over the same information criminals and governments dream of having—and to a patchwork of private vendors who store it indefinitely.
We’ve already seen where that leads. In the UK, after age-gating rules took effect under the Online Safety Act, one of the verification companies was breached. In the US, the AU10TIX breach exposed user data from Uber, X, and TikTok. Each time, the same story: people forced to upload passports, driver’s licenses, or selfies, only to watch that data leak.
If hackers wanted to design a dream scenario for mass identity theft, this would be it. Governments legally requiring millions of adults to upload the exact documents criminals need.
The illusion of safety
The irony is that none of this actually protects children. In the UK, VPN sign-ups spiked 1,400% the day the new restrictions went live. Some of that surge is presumably adults balking at handing over personal data, but the larger point stands: any teen with a search bar can bypass an age gate in minutes. The result isn’t a safer internet—it’s an internet that collects more data about adults while pushing kids toward sketchier, unregulated corners of the web.
Parents already have better options for keeping inappropriate content at bay: device-level controls, filtered browsers, phones built for kids. None of those require turning the rest of us into walking ID tokens.
From bars to browsers
Defenders like to compare online verification to showing ID at a bar. But when you flash your license to buy a beer, the cashier doesn’t scan it, store it, and build a permanent record of your drinking habits. Online verification does exactly that. Every log-in becomes another data point linking your identity to what you read, watch, and say.
It’s not hard to imagine how this infrastructure expands. Today it’s porn, violence, and “mature” chatbots. Tomorrow it could be reproductive-health forums, LGBTQ+ resources, or political discussion groups flagged as “sensitive.” Once the pipes exist, someone will always find a new reason to use them.
When innovation starts to feel invasive
Let’s be honest: there’s money to be made in building porn machines, and that’s what this new offering from ChatGPT feels like. It didn’t take long for AI to grab a slice of the OnlyFans market. Except the price of admission isn’t only $20 a month; it’s potentially your identity and a whole lot of heartache.
As Jason Kelley of the Electronic Frontier Foundation explained on my Lock and Code podcast,
“Once you are asked to give certain types of information to a website, there’s no way to know what that company, who’s supposedly verifying your age, is doing with that information.”
The verification process itself becomes a form of surveillance, creating detailed records of legal adult behavior that governments and cybercriminals can exploit.
This is how surveillance gets normalized: one “safety” feature at a time.
ChatGPT’s erotic mode will make uploading an ID feel routine—a casual step before chatting with your favorite AI companion. But beneath the surface, those IDs will feed a new class of data brokers and third-party verifiers whose entire business depends on linking your real identity to everything you do online.
We’ve reached the point where governments and corporations don’t need to build a single centralized database; we’re volunteering one piece at a time.
ChatGPT’s latest move is a preview of what’s next. The internet has been drifting toward identity for years—from social logins to verified profiles—and AI is simply accelerating that shift. The pockets of anonymity that once existed are becoming harder to find, replaced by a web that expects to know exactly who you are.
The future of “safe” online spaces shouldn’t depend on handing over your driver’s license to an AI.
We don’t just report on data privacy—we help you remove your personal information
Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.
