Meta AI chatbot bug could have allowed anyone to see private conversations
A researcher has disclosed to TechCrunch that he received a $10,000 bounty for reporting a bug that let anyone access other users’ private prompts and responses in the Meta AI chatbot.
On June 13, we reported that the Meta AI app publicly exposes user conversations, often without users realizing it. In these cases, the app made “shared” conversations accessible through its Discover feed, so others could easily find them. Meta insisted this wasn’t a bug, even though many people didn’t understand that their conversations were visible to others.
However, Sandeep Hodkasia, the researcher who found the awarded bug, was able to access conversations that weren’t shared at all, but “private.” To understand what he did, you need to know that Meta AI allows users to edit their questions (prompts) to regenerate text and images.
Hodkasia’s testing revealed that the chatbot assigned a unique number to each prompt-and-response pair produced by editing a prompt. By analyzing the network traffic generated when he edited a prompt, Hodkasia figured out how he could change that unique identification number.
Substituting different numbers, which Hodkasia said were easy to guess, returned the prompt and AI-generated response of someone else entirely. And because the numbers were so easily guessable, an attacker could have scraped a host of other users’ conversations with Meta AI.
The root cause was that Meta’s servers failed to check whether the person requesting the information had the authorization to access it, a flaw commonly known as an insecure direct object reference (IDOR).
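To make the missing check concrete, here is a minimal sketch in Python. All names (the record store, field names, and functions) are hypothetical, since Meta AI’s real endpoints and data model are not public; the point is only to contrast a lookup that trusts the caller-supplied ID with one that verifies ownership first.

```python
# Hypothetical store of prompt/response records, keyed by a guessable
# sequential ID (this mirrors the bug pattern, not Meta's actual system).
PROMPTS = {
    1001: {"owner": "alice", "prompt": "draft my resume", "response": "..."},
    1002: {"owner": "bob", "prompt": "a private question", "response": "..."},
}

def get_prompt_vulnerable(prompt_id, requesting_user):
    # Vulnerable version: returns whatever record the supplied ID points to,
    # never checking who owns it -- an insecure direct object reference.
    return PROMPTS.get(prompt_id)

def get_prompt_fixed(prompt_id, requesting_user):
    # Fixed version: verify the requester owns the record before returning it.
    record = PROMPTS.get(prompt_id)
    if record is None or record["owner"] != requesting_user:
        return None  # deny access instead of leaking another user's data
    return record

# An attacker logged in as "alice" enumerating sequential IDs:
leaked = [pid for pid in range(1000, 1010) if get_prompt_vulnerable(pid, "alice")]
blocked = [pid for pid in range(1000, 1010) if get_prompt_fixed(pid, "alice")]
print(leaked)   # [1001, 1002] -- bob's record leaks too
print(blocked)  # [1001] -- only alice's own record is returned
```

In the vulnerable version, every existing ID leaks regardless of who asks; in the fixed version, only the requester’s own records come back, which is the server-side check Meta’s fix presumably added.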
According to Hodkasia, he filed the bug report on December 26, 2024, and Meta fixed it on January 24, 2025. Meta confirmed the fix date and stated that it found no evidence of abuse.
How to safely use AI
While we continue to argue that developments in AI are moving too fast for security and privacy to be baked into the tech, there are some things to keep in mind to make sure your private information stays safe:
- If you’re using an AI developed by a social media company (Meta AI, Llama, Grok, Gemini (formerly Bard), and so on), make sure you are not logged in on that social media platform. Your conversations could be tied to your social media account, which might contain a lot of personal information.
- When using AI, make sure you understand how to keep your conversations private. Many AI tools have an “Incognito Mode.” Do not “share” your conversations unless needed. But always keep in mind that there could be leaks, bugs, and data breaches revealing even those conversations you set to private.
- Do not feed any AI your private information.
- Familiarize yourself with privacy policies. If they’re too long, feel free to use an AI to extract the main concerns.
- Never share personally identifiable information (PII).
We don’t just report on threats – we help protect your social media
Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.