SesameOp Backdoor Abused OpenAI Assistants API for Remote Access – Hackread – Cybersecurity News, Data Breaches, Tech, AI, Crypto and More

Cybersecurity researchers have identified a new backdoor called SesameOp that uses the OpenAI Assistants API to exchange instructions and data, replacing the typical attacker-controlled servers with a legitimate cloud service.

According to Microsoft’s Detection and Response Team (DART), the findings show a growing trend where threat actors use trusted technologies to hide malicious traffic. SesameOp doesn’t exploit a vulnerability in OpenAI products; instead, it misuses an available feature to communicate once systems are compromised.

The investigation began after analysts examined modified Microsoft Visual Studio utilities that loaded unusual libraries. This led to the discovery of Netapi64.dll, an obfuscated loader that runs a hidden .NET-based component named OpenAIAgent.Netapi64.

The malware maintains persistence and allows remote operators to issue commands, gather results, and send them back through the OpenAI API as if they were ordinary data exchanges.

Microsoft found that the backdoor stores and retrieves instructions by creating and managing custom “Assistants” within an OpenAI account. These Assistants act as placeholders for encoded messages labeled with terms such as “SLEEP,” “Payload,” and “Result.” Each step of communication is encrypted, compressed, and Base64-encoded to limit visibility and evade inspection.
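
The layered wrapping Microsoft describes (compress, then encrypt, then Base64-encode) can be sketched in a few lines of Python. This is an illustrative reconstruction only: the report does not disclose SesameOp's actual cipher, so a simple XOR stream stands in for the real encryption step, and the `wrap`/`unwrap` names are hypothetical.

```python
import base64
import zlib

def wrap(payload: bytes, key: bytes) -> str:
    """Compress, encrypt (placeholder XOR), then Base64-encode a message."""
    compressed = zlib.compress(payload)
    encrypted = bytes(b ^ key[i % len(key)] for i, b in enumerate(compressed))
    return base64.b64encode(encrypted).decode("ascii")

def unwrap(blob: str, key: bytes) -> bytes:
    """Reverse the pipeline: Base64-decode, decrypt, then decompress."""
    encrypted = base64.b64decode(blob)
    compressed = bytes(b ^ key[i % len(key)] for i, b in enumerate(encrypted))
    return zlib.decompress(compressed)
```

The point of the layering is that the blob riding over the Assistants API looks like any other opaque Base64 string, so neither OpenAI nor a network inspector sees plaintext commands or results.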

Further analysis showed that SesameOp applies a .NET AppDomainManager injection technique to load its code at runtime and execute payloads through a JavaScript engine embedded in memory. The design points to long-term persistence and espionage motives, rather than broad financial attacks.
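
AppDomainManager injection works by planting a `.config` file next to a legitimate .NET executable so the runtime loads an attacker-supplied assembly before `Main()` runs. The fragment below is a generic illustration of the technique, not taken from the SesameOp samples; the assembly and type names are borrowed from the article's component names purely for illustration.

```xml
<!-- hypothetical example.exe.config dropped beside a legitimate .NET executable -->
<configuration>
  <runtime>
    <!-- Tells the CLR to instantiate this AppDomainManager at process startup -->
    <appDomainManagerAssembly value="Netapi64, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
    <appDomainManagerType value="OpenAIAgent.Netapi64" />
  </runtime>
</configuration>
```

Because the host executable itself is unmodified (and may be signed), the technique sidesteps controls that only validate the binary on disk.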

Following the report, Microsoft collaborated with OpenAI to disable the API key and account used by the attacker. Both companies confirmed that the activity was limited to API calls and did not involve any access to model data or user information.

Microsoft said that the issue is not a flaw in OpenAI’s systems but a demonstration of how attackers adapt legitimate tools for covert use. The company advises organizations to audit server logs, apply strict proxy and firewall controls, and monitor for connections to api.openai.com originating from unexpected processes.
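
That last recommendation can be approximated with a simple allowlist check over proxy or EDR telemetry. The sketch below assumes a hypothetical event feed of `(process_name, destination_host)` pairs and a defender-maintained allowlist; neither is from Microsoft's guidance, which does not prescribe a specific format.

```python
# Hypothetical allowlist of processes expected to reach the OpenAI API.
ALLOWED_PROCESSES = {"chrome.exe", "msedge.exe", "python.exe"}

def flag_suspicious(events):
    """Return (process, host) pairs where an unexpected process
    contacted api.openai.com. `events` is an iterable of
    (process_name, destination_host) tuples from proxy/EDR logs."""
    return [
        (proc, host)
        for proc, host in events
        if host == "api.openai.com" and proc.lower() not in ALLOWED_PROCESSES
    ]
```

In practice the allowlist would be tuned per environment; the useful signal is any long-lived, non-browser, non-developer process beaconing to the API endpoint.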

Nevertheless, legitimate cloud services, including AI platforms, are becoming attractive channels for threat actors who want to avoid building their own infrastructure, so organizations need to monitor their systems accordingly. For technical details on the SesameOp backdoor operation, see Microsoft's blog post.
