“Chat & Ask AI,” a highly popular mobile application available on both Google Play and the Apple App Store, has suffered a significant data exposure.
An independent security researcher discovered a vulnerability that left approximately 300 million private messages accessible to the public.
This breach impacts more than 25 million users, raising serious concerns about privacy and data handling in the booming AI app market.
That researcher, known as Harry, reported his findings to 404 Media.
According to the analysis, the root cause of the leak was not a sophisticated cyberattack but a simple misconfiguration.
The app utilizes Google Firebase, a platform commonly used for mobile app backends. Firebase data is only as private as the security rules developers configure for it, and misconfigured rules are a well-documented source of leaks.
In this case, those rules were left open, so anyone with basic technical knowledge could sign in as an “authenticated” user, and that alone was enough to read the backend storage.
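To see how low that bar is, here is a minimal sketch using the Firebase Web SDK. It assumes, purely for illustration, that the backend was Cloud Firestore with anonymous sign-in enabled and a permissive rule along the lines of `allow read: if request.auth != null;`; the collection name and config values are hypothetical, not taken from the app.

```typescript
// Sketch of the misconfiguration class described above (hypothetical names).
// With anonymous auth enabled and a rule that only checks request.auth != null,
// "authenticated" effectively means anyone.
import { initializeApp } from "firebase/app";
import { getAuth, signInAnonymously } from "firebase/auth";
import { getFirestore, collection, getDocs } from "firebase/firestore";

async function main() {
  // A Firebase client config ships inside every app build; it is public by
  // design and is not a secret that protects the data.
  const app = initializeApp({
    apiKey: "PUBLIC_CLIENT_API_KEY",
    projectId: "example-project",
  });

  // One call turns an anonymous visitor into an "authenticated" user.
  await signInAnonymously(getAuth(app));

  // Because the rule never checks WHICH user is asking, this read returns
  // every document in the collection, not just the caller's own data.
  const db = getFirestore(app);
  const snapshot = await getDocs(collection(db, "chats"));
  snapshot.forEach((doc) => console.log(doc.id, doc.data()));
}

main().catch(console.error);
```

The point is not that Firebase is insecure, but that “authenticated” is a meaningless gate when anyone can authenticate anonymously in a single call.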
The scale of the leaked data is massive. Harry reported having access to a database containing the complete chat histories of millions of users.
The exposed files included timestamps, user configurations, the specific AI model selected (such as ChatGPT, Claude, or Gemini), and the names users assigned to their chatbots.
Although the app claims more than 50 million users, Harry’s analysis of a sample of 60,000 users and one million messages indicated that the vulnerability affected at least half of the user base, more than 25 million people.
The content of the exposed messages highlights the sensitive nature of AI interactions. Users often treat AI chatbots as private confidants, sharing deeply personal or even dangerous queries.
The leaked logs revealed users asking the bot how to write suicide notes, inquiring about painless methods of self-harm, and seeking instructions for illegal activities like manufacturing methamphetamine or hacking software.
This breach demonstrates that “wrapper” apps, which provide a user interface for major AI models like OpenAI’s or Google’s, often lack the robust security infrastructure of the companies whose models they resell.
This incident serves as a critical reminder for both developers and users. For developers, it underscores the need to lock down cloud storage permissions and conduct regular security audits, especially when handling personal data.
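As a sketch of what locking things down can look like in this setting, the standard pattern is to scope every record to its owner and enforce that on the server. The collection and field names below are hypothetical; the key point is that the client query mirrors a server-side Firestore rule such as `allow read: if request.auth != null && request.auth.uid == resource.data.ownerId;`, so even a logged-in stranger can read only their own documents.

```typescript
// Hypothetical hardened client. Assumes each chat document stores an ownerId
// field and the server-side rule quoted in the text, which rejects any read
// where request.auth.uid != resource.data.ownerId.
import { initializeApp } from "firebase/app";
import { getAuth } from "firebase/auth";
import { getFirestore, collection, query, where, getDocs } from "firebase/firestore";

async function loadOwnChats() {
  const app = initializeApp({
    apiKey: "PUBLIC_CLIENT_API_KEY",
    projectId: "example-project",
  });

  // Assumes a real (non-anonymous) sign-in has already happened.
  const user = getAuth(app).currentUser;
  if (!user) throw new Error("Sign in before loading chats");

  // The where() clause mirrors the security rule. Firestore rules are not
  // filters: a broader query that could match other users' documents is
  // rejected outright rather than silently narrowed.
  const db = getFirestore(app);
  const ownChats = query(
    collection(db, "chats"),
    where("ownerId", "==", user.uid)
  );
  const snapshot = await getDocs(ownChats);
  snapshot.forEach((doc) => console.log(doc.id, doc.data()));
}

loadOwnChats().catch(console.error);
```

Crucially, the restriction lives in the rules, not the client: a modified client that drops the where() clause gets a permission error instead of other users’ chats.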
For users, it illustrates the risk of sharing sensitive information with third-party AI wrappers. While these apps offer convenient access to powerful AI tools, your data is only as safe as the app’s weakest security setting.