A new credential-stealing malware campaign, which researchers are calling “DeepLoad,” has been infecting enterprise IT environments, researchers say.
In a report released Monday, ReliaQuest researchers Thassanai McCabe and Andrew Currie say the attack’s most notable feature is its combined use of artificial intelligence and social engineering “to defeat the controls most organizations rely on, turning one user action into persistent, credential-stealing access.”
DeepLoad is delivered to victims via “ClickFix”-style social-engineering lures, such as fake browser prompts or error pages. If a user falls for the scheme, they trigger an infection chain whose developers (or, more likely, their AI tools) have put considerable work into evading security technology “at every stage.”
The loader “buries functional code under thousands of meaningless variable assignments,” and the payload runs behind a Windows lock screen process that is “overlooked by security tools” monitoring for threats. ReliaQuest said “the sheer volume” of code padding likely rules out human-only involvement.
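The padding technique the researchers describe can be pictured with a toy example (hypothetical, and not DeepLoad’s actual code): one functional line buried among machine-generated, never-used assignments. Because the junk can be regenerated endlessly, every copy of the file has a different hash, which is what defeats naive signature matching.

```python
import random
import string

def generate_padded_loader(real_logic: str, junk_lines: int = 20) -> str:
    """Build source text where one functional line is buried under
    meaningless, randomly named variable assignments (toy illustration)."""
    lines = []
    insert_at = random.randrange(junk_lines)
    for i in range(junk_lines):
        if i == insert_at:
            lines.append(real_logic)  # the only line that does anything
        name = "".join(random.choices(string.ascii_lowercase, k=12))
        lines.append(f"{name} = {random.randint(0, 99999)}")
    return "\n".join(lines)

padded = generate_padded_loader("result = 2 + 2")
namespace = {}
exec(padded, namespace)  # the junk is dead weight; the real logic still runs
print(namespace["result"])       # 4
print(len(padded.splitlines()))  # 21 lines, only one of them meaningful
```

Each call produces a byte-for-byte different file with identical behavior, which is why a tool that only matches known file patterns never sees the same sample twice.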
“We assess with high confidence that AI was used to build this obfuscation layer,” McCabe and Currie write. “If so, organizations should expect frequent updates to the malware and less time to adapt detection coverage between waves.”
DeepLoad steals credentials through real-time keylogging, and even when security teams block the initial loader, it can persist through built-in fallbacks.
“In the incidents we investigated, the loader spread to connected USB drives, which means the initial host is unlikely to be the only impacted system,” McCabe and Currie wrote. “Even after cleanup, a hidden persistence mechanism not addressed by standard remediation workflows re-executed the attack three days later.”
ReliaQuest’s research offers more evidence that over the past year, some traditional static cybersecurity practices — such as searching for malware signatures or file-based patterns — may be fast becoming obsolete, as AI models can spin out endless variations of attack tooling with unique signatures.
Other organizations like Google and Anthropic have been sounding the alarm that AI-enhanced cyberattacks are dramatically shrinking the time defenders must respond to a compromise.
At the RSA Conference in San Francisco this year, experts told CyberScoop that the next two years are set to be a “perfect storm” favoring AI-powered offense, with cybercriminals and nation-states more quickly adapting the technology to add greater speed and scale to their attacks than their defensive counterparts.
McCabe and Currie say the likely continued use of AI to frustrate static analysis means defenders will need to shift their focus to other indicators of compromise.
“Based on what we’ve observed, organizations must prioritize behavioral, runtime detection—not file-based scanning—to catch this campaign (and similar ones) early,” they wrote.

