Significant security concerns have been raised regarding the OpenAI ChatGPT app on macOS. The app reportedly stores user conversations in plain text in a non-protected location, sparking a debate about its adherence to macOS’s stringent security protocols.
This means that any other running app, process, or piece of malware could read these conversations, and the data they contain, without triggering any permission prompt.
The OpenAI ChatGPT app on macOS is not sandboxed and stores all user conversations in plain text at the following location: ~/Library/Application Support/com.openai.chat/conve…{uuid}/
This is demonstrated by Pedro José Pereira Vieito on Threads.
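To illustrate the problem, here is a minimal Python sketch of what any unprivileged process could do: enumerate files under the app's data directory, which sits in ~/Library/Application Support and is not protected by a macOS permission (TCC) prompt. The directory constant below reflects the bundle identifier reported above; the helper name and structure are illustrative, not taken from any actual exploit.

```python
import glob
import os

# Base data directory of the ChatGPT macOS app, as reported above.
# The per-conversation subdirectories are UUID-named.
CHAT_DATA_DIR = os.path.expanduser(
    "~/Library/Application Support/com.openai.chat"
)

def find_plaintext_conversations(base_dir: str) -> list[str]:
    """Return every regular file under base_dir.

    Because this location is outside the app sandbox and not covered
    by a TCC consent prompt, any process running as the user can read
    these files directly.
    """
    matches = []
    for path in glob.glob(os.path.join(base_dir, "**", "*"), recursive=True):
        if os.path.isfile(path):
            matches.append(path)
    return matches
```

Running `find_plaintext_conversations(CHAT_DATA_DIR)` on an affected machine would list the stored conversation files with no permission dialog shown to the user, which is exactly the exposure Pereira Vieito demonstrated.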
Since the release of macOS Mojave 10.14 six years ago, the operating system has enforced robust security measures that block unauthorized access to users' private data.
These measures require explicit user permission for any app attempting to access sensitive information, such as:
- Calendar
- Contacts
- Photos
- Documents & Desktop folders
- Any third-party app sandbox
Pereira Vieito explained how he uncovered the original issue. “I was curious about why [OpenAI] opted out of using the app sandbox protections and ended up checking where they stored the app data,” he said. His investigation revealed that OpenAI stores ChatGPT conversations in a non-protected location, making them accessible to any running app, process, or malware.
Despite these built-in defenses, OpenAI chose to opt out of the macOS sandbox and store conversations in plain text in a non-protected location.
This decision effectively disables the security measures designed to protect user data from unauthorized access.
OpenAI distributes the ChatGPT macOS app exclusively through its own website, bypassing the Mac App Store.
This distribution method allows the app to avoid Apple’s sandboxing requirements, which are mandatory for software distributed via the Mac App Store.
“We are aware of this issue and have released a new version of the application that encrypts these conversations,” OpenAI spokesperson Taya Christianson told Cyber Security News.
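OpenAI has not published details of its fix, but the general technique (encrypting data at rest with an authenticated scheme) can be sketched as follows. This is a deliberately simplified, standard-library-only toy construction for illustration; a production macOS app would instead use platform cryptography such as AES-GCM via CryptoKit, with the key held in the Keychain rather than in application code.

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key||nonce||counter.

    Toy construction for illustration only -- real encryption at rest
    should use a vetted cipher such as AES-GCM.
    """
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(
            key + nonce + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
    return out[:length]

def encrypt_at_rest(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt plaintext and append an HMAC tag so tampering is detected."""
    nonce = os.urandom(16)
    stream = _keystream(key, nonce, len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, stream))
    tag = hmac.new(key, nonce + ciphertext, hashlib.sha256).digest()
    return nonce + tag + ciphertext

def decrypt_at_rest(key: bytes, blob: bytes) -> bytes:
    """Verify the tag, then reverse the keystream XOR."""
    nonce, tag, ciphertext = blob[:16], blob[16:48], blob[48:]
    expected = hmac.new(key, nonce + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("wrong key or ciphertext tampered with")
    stream = _keystream(key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, stream))
```

The point of the sketch is the contrast with the original behavior: once conversations are stored as opaque ciphertext, another process that reads the files learns nothing without the key, and any modification to the files is detected on decryption.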
The revelation has led to widespread concern among users and security experts. Many question why OpenAI would bypass such critical security protocols, potentially exposing sensitive user data to malicious actors.
Security experts and tech journalists are closely monitoring the situation, with many calling for immediate action to address these vulnerabilities.
The incident highlights the ongoing challenges in ensuring data security and the responsibilities of developers in safeguarding user information.
As the debate continues, it underscores the importance of adhering to established security protocols to protect user data.
Both platform providers and app developers must collaborate to ensure robust data protection measures are in place.