A sophisticated supply chain attack using AI-generated repositories is targeting researchers, developers, and security professionals through compromised GitHub accounts, according to findings from Morphisec Threat Labs.
The campaign leverages dormant GitHub accounts and polished, AI-crafted repositories to distribute a previously undocumented backdoor known as PyStoreRAT.
Attack Methodology
The attackers employed a carefully orchestrated strategy by reactivating dormant GitHub accounts and publishing seemingly legitimate repositories that appeared to be AI-generated tools or utilities.
Once these repositories gained traction within the developer community, threat actors quietly injected the PyStoreRAT backdoor into the codebase, exploiting the trust developers place in established repositories.
This approach represents an evolution in supply chain attacks, where adversaries weaponize the open-source ecosystem by creating convincing fake projects that initially appear benign.
By targeting the specific audience of researchers and developers who frequently download and test new tools, the campaign maximizes its potential impact within the technology sector.
PyStoreRAT distinguishes itself from conventional malware loaders through capabilities that go well beyond simple payload delivery.
The backdoor performs comprehensive system profiling to gather intelligence about infected machines before deploying multiple secondary payloads tailored to the environment.
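To make the profiling step concrete, here is a minimal, hypothetical sketch (not PyStoreRAT's actual code) of the kind of host fingerprinting a backdoor gathers before an operator selects environment-specific payloads. All attribute names are illustrative.

```python
# Hypothetical illustration of pre-payload system profiling,
# built only from Python standard-library calls.
import getpass
import platform
import socket


def profile_host() -> dict:
    """Collect basic host attributes an operator might use to pick a payload."""
    return {
        "os": platform.system(),            # e.g. "Windows", "Linux", "Darwin"
        "os_version": platform.version(),   # kernel/build string
        "arch": platform.machine(),         # e.g. "x86_64", "AMD64"
        "hostname": socket.gethostname(),
        "username": getpass.getuser(),
        "python": platform.python_version(),
    }
```

Defenders can look for exactly this pattern in submitted pull requests: a utility that suddenly collects hostname, username, and OS details it has no functional need for.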
Notably, PyStoreRAT includes detection logic specifically designed to identify endpoint detection and response (EDR) solutions such as CrowdStrike Falcon.
When security tools are detected, the malware alters its execution path to evade analysis and maintain persistence.
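The EDR check described above typically amounts to matching running process names against a list of known security products. The sketch below is a hypothetical illustration of that pattern; the process names are common examples, not a list taken from the PyStoreRAT sample.

```python
# Hypothetical example of EDR-presence detection via process-name matching.
# The product/process names are illustrative, not exhaustive.
EDR_PROCESS_NAMES = {
    "csfalconservice.exe",  # CrowdStrike Falcon sensor service
    "sentinelagent.exe",    # SentinelOne agent
    "msmpeng.exe",          # Microsoft Defender engine
}


def detect_edr(running_processes: list[str]) -> set[str]:
    """Return the known EDR process names found in the given process list."""
    running = {name.lower() for name in running_processes}
    return EDR_PROCESS_NAMES & running
```

When such a check returns a non-empty set, the malware can branch into a benign-looking code path, which is why sandbox results for samples like this can look clean.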
The backdoor also employs rotating command-and-control (C2) infrastructure, making it significantly harder for defenders to block communications and track the threat actors.
Morphisec researchers identified Russian-language indicators within the malware code and associated infrastructure.
The campaign utilized GitHub cluster mapping techniques to identify and target specific developer communities, suggesting a well-resourced and coordinated operation.
Morphisec has published indicators of compromise (IOCs) to assist security teams in detecting and defending against this threat.
Organizations are advised to scrutinize GitHub repositories before integrating code, implement enhanced monitoring for suspicious repository activity, and validate the authenticity of seemingly AI-generated projects.
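The vetting advice above can be partly automated. The sketch below is a hedged example of dormancy-then-reactivation heuristics applied to repository metadata (such as the `created_at` and `pushed_at` fields returned by the GitHub API); the thresholds and warning strings are illustrative assumptions, not an established standard.

```python
# Illustrative heuristics for flagging reactivated-account repositories.
# Thresholds are assumptions chosen for the example, not vetted guidance.
from datetime import datetime, timedelta


def flag_suspicious_repo(created_at: datetime,
                         owner_last_active: datetime,
                         pushed_at: datetime,
                         contributor_count: int) -> list[str]:
    """Return human-readable warnings for patterns seen in this campaign."""
    warnings = []
    # Owner dormant for a long stretch, then suddenly pushing again.
    if pushed_at - owner_last_active > timedelta(days=365):
        warnings.append("owner dormant >1 year before recent push")
    # A very new repository with a single contributor deserves extra review.
    if datetime.utcnow() - created_at < timedelta(days=30) and contributor_count <= 1:
        warnings.append("young single-contributor repository")
    return warnings
```

A non-empty result should trigger manual review before the code is integrated, not an automatic block: legitimate maintainers also return from long breaks.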
