Firebase Misconfiguration Exposes 300M Messages From Chat & Ask AI Users


A massive security failure has put the private conversations of millions at risk after an unprotected database was left accessible online. Discovered by an independent researcher, the leak exposed roughly 300 million messages from more than 25 million users of Chat & Ask AI, a popular app with over 50 million downloads across the Google Play Store and Apple App Store.

The app is owned by Codeway, a Turkish technology firm founded in Istanbul in 2020, and acts as a ‘wrapper’, giving users a single gateway to well-known AI models such as OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude. Because it funnels conversations with multiple systems through one backend, a single technical slip-up can have a massive impact on the privacy of its global user base.

A Simple Door Left Open

This was not a complex hack; it was caused by a well-known technical error known as a Firebase misconfiguration. Firebase is a Google backend service that apps use to store and manage data, but in this case the database’s Security Rules were mistakenly set to allow public access. This effectively left the digital front door wide open, allowing anyone to read or delete data without a password.
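To illustrate how low the bar is: when a Firebase Realtime Database is left with fully open Security Rules, no credentials are needed to pull data, only the project’s public URL. The snippet below is a minimal, hypothetical sketch of that kind of unauthenticated read (the project URL is invented for illustration); against a properly locked-down database, the same request would simply be refused with a permission error.

```python
import requests

# Hypothetical Realtime Database URL; real project names are not published here.
DB_URL = "https://example-project-id-default-rtdb.firebaseio.com"

# With rules like {"rules": {".read": true, ".write": true}}, this unauthenticated
# request returns stored data; with locked-down rules it returns "Permission denied".
# "shallow=true" asks the REST API for only the top-level keys, not full contents.
resp = requests.get(f"{DB_URL}/.json", params={"shallow": "true"}, timeout=10)

if resp.status_code == 200:
    print("Database is publicly readable; top-level keys:", list((resp.json() or {}).keys()))
else:
    print("Access denied or not found:", resp.status_code, resp.text)
```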

The researcher, known as Harry, noted that the data included full chat histories and the specific names users gave to their AI bots. The files also contained ‘deeply personal and disturbing requests’, such as ‘discussions of illegal activities and requests for suicide assistance’. Because many people treat these bots as private journals, the exposure is a major concern.

Not The First Time

This is not the first time an AI chat platform has faced a data exposure incident. Earlier, OmniGPT suffered a breach that exposed sensitive user information, showing how quickly privacy risks escalate when AI tools are deployed without strict backend safeguards.

While the technical causes may vary, these incidents highlight a recurring pattern where traditional application security failures intersect with AI services that store highly personal conversations, increasing the impact far beyond a typical data leak.

Lessons for AI Users

This discovery led Harry to dig deeper. He built a tool to scan other apps for the same weakness and found that 103 of the 200 iOS apps he tested were affected, exposing tens of millions of files. To help the public, he created a website where users can check whether their apps are at risk.
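The article does not publish the scanner itself, but conceptually such a tool only needs to attempt an unauthenticated read against each app’s Firebase backend. Below is a rough sketch of that idea, assuming a list of candidate Realtime Database URLs has already been extracted from app bundles (the URLs and helper shown are hypothetical, not the researcher’s actual tool).

```python
import requests

# Hypothetical database URLs; real scanners typically pull these from the
# google-services.json or GoogleService-Info.plist files bundled with each app.
CANDIDATE_DBS = [
    "https://example-app-one-default-rtdb.firebaseio.com",
    "https://example-app-two-default-rtdb.firebaseio.com",
]

def is_publicly_readable(db_url: str) -> bool:
    """Return True if the database answers an unauthenticated shallow read."""
    try:
        resp = requests.get(f"{db_url}/.json", params={"shallow": "true"}, timeout=5)
        return resp.status_code == 200
    except requests.RequestException:
        return False

for url in CANDIDATE_DBS:
    status = "EXPOSED" if is_publicly_readable(url) else "locked down"
    print(f"{url}: {status}")
```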

Harry also alerted Codeway to the issue on 20 January 2026. While the company reportedly fixed the error across all its apps within hours of the report, the database may have been vulnerable for a long period before it was secured. Once information is exposed on the open internet, it is difficult to determine whether other parties copied it before the leak was plugged. The discovery underscores that your private data is only as secure as a single developer’s checklist.

Screenshot shows redacted preview of the exposed information (Image credit: Hackread.com)

To protect yourself, avoid using your real name or sharing sensitive documents like bank statements with any chatbot. It is also wise to stay logged out of social media while using these tools to prevent your identity from being linked to your chats. Above all, treat every conversation as if it could one day be public, and be extremely cautious of what you share.

Speaking to Hackread.com, James Wickett, CEO of DryRun Security, explained that these risks become very real once AI is used in actual products. He noted that the “recent AI chat app breach” was not a novel exploit, but a “familiar backend misconfiguration, made far more dangerous by the sensitivity of the data involved.”

“Prompt injection, data leakage, and insecure output handling stop being academic once AI systems are wired into real products, because at that point the model becomes just another untrusted actor in the system. Inputs are tainted, outputs are tainted, and the application has to enforce boundaries explicitly rather than assuming good behavior,” James added.

“This is the frontier of application security in 2026, where traditional appsec failures collide with AI systems at scale, and where most of the real risk is now concentrated,” he explained.




