The Kimsuky APT group has begun leveraging the generative AI service ChatGPT to craft deepfake ID cards of a South Korean military agency.
Phishing lures deliver batch files and AutoIt scripts designed to evade anti-virus scanning through sophisticated obfuscation. Organizations must deploy endpoint detection and response (EDR) solutions to unmask hidden scripts and secure endpoints.
On July 17, 2025, the Genians Security Center (GSC) identified a spear-phishing campaign attributed to North Korea’s Kimsuky group impersonating a defense-related institution.
Attackers generated sample military ID card images using ChatGPT and embedded these deepfakes in a spear-phishing lure disguised as an ID issuance review request.
Deepfake, a blend of “deep learning” and “fake,” refers to AI-generated or manipulated media that convincingly imitates real individuals.
First popularized in 2017 through celebrity face swaps, the technology now enables state-sponsored actors to produce counterfeit identification for espionage operations. This report analyzes a real-world case of AI-driven deepfake abuse, extracts threat insights, and outlines potential defense measures.
Earlier in July, GSC released its “ClickFix Tactics Analysis Report,” detailing APT phishing that mimicked South Korean portal security functions.
The same malware family, malicious PowerShell commands delivered via popup windows, resurfaced in the deepfake campaign. Prior to the ChatGPT-driven ID forgery, Kimsuky had also sent phishing emails with subject lines touting “AI managing emails on your behalf.”
Meanwhile, U.S. AI vendor Anthropic reported that North Korean IT workers misused generative AI to fabricate virtual identities and pass technical interviews, circumventing international sanctions to earn foreign currency.
South Korea’s Ministry of Foreign Affairs warned that outsourcing work to North Korean IT contractors exposes companies to intellectual property theft as well as reputational and legal risks.
These developments underscore how AI tools amplify state-sponsored threat capabilities and necessitate vigilant monitoring across recruitment and operational workflows.
Technical Analysis
Deepfake ID Spear-Phishing
In July, attackers sent an email closely spoofing a military institution’s domain. The message contained a “Government_ID_Draft(***).zip” archive, which unpacked a malicious Windows shortcut (.lnk).
Security-conscious users are usually wary of unfamiliar email attachments and avoid opening them, a fact well understood by APT threat actors.

The .lnk launched cmd.exe to set an environment variable holding an obfuscated string, then extracted characters sequentially to reconstruct and invoke a PowerShell command.
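Conceptually, the reconstruction works as in the short Python sketch below, which uses an invented scrambled string and index list; the real chain performs the same trick with cmd.exe substring expansion of an environment variable rather than Python.

```python
# Conceptual illustration only: the actual dropper does this with cmd.exe
# substring expansion (%VAR:~index,1%); the string and indices are invented.
scrambled = "llehsrewop"                    # payload characters stored out of order
indices = [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]    # hard-coded extraction order

# Rebuilding the command one character at a time means no contiguous,
# signature-friendly string ever appears on disk.
command = "".join(scrambled[i] for i in indices)
print(command)   # -> "powershell"
```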
This command contacted a private C2 server to download both the AI-generated ID image and a batch file, ‘LhUdPC3G.bat.’ The image’s metadata identified ChatGPT as its origin, and a deepfake detection tool flagged it as synthetic with 98% probability.
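Embedded provenance metadata is a quick triage signal. The sketch below shows one way a responder might dump an image’s metadata to look for generator markers; it assumes the Pillow library is installed, the filename is hypothetical, and the exact fields an AI generator writes vary by tool and export path.

```python
# Hedged triage sketch: print embedded image metadata that may reveal an
# AI generator. Filename is hypothetical; field names vary between tools.
from PIL import Image, ExifTags

img = Image.open("Government_ID_Draft.png")   # hypothetical sample path

# PNG text chunks (e.g. "Software", "Description") surface in img.info
for key, value in img.info.items():
    if isinstance(value, str):
        print(f"PNG chunk {key}: {value}")

# EXIF tags, if present, may also carry a software/generator string
for tag_id, value in img.getexif().items():
    tag = ExifTags.TAGS.get(tag_id, tag_id)
    print(f"EXIF {tag}: {value}")
```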

Because military government-employee IDs are legally protected identification documents, producing copies in identical or similar form is illegal.
The batch script employed similar slicing-based obfuscation to reconstruct commands that registered further payload execution via Task Scheduler under the guise of a Hancom Office update.
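Persistence of this kind can be hunted for on endpoints. As a rough illustration (not an indicator taken from the sample), the Python sketch below enumerates Windows scheduled tasks via schtasks and flags entries whose action invokes a script interpreter or points into user-writable directories; it assumes an English-locale Windows host for the CSV column names.

```python
# Illustrative hunting sketch, assuming English-locale Windows column names.
import csv
import io
import subprocess

output = subprocess.run(
    ["schtasks", "/query", "/fo", "csv", "/v"],
    capture_output=True, text=True, check=True,
).stdout

# Heuristic markers: script interpreters and user-writable paths are unusual
# for legitimate vendor update tasks.
markers = ("wscript", "cscript", "powershell", "mshta",
           "appdata", "programdata", "\\temp\\")

for row in csv.DictReader(io.StringIO(output)):
    action = (row.get("Task To Run") or "").lower()
    if any(m in action for m in markers):
        print(row.get("TaskName"), "->", row.get("Task To Run"))
```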
Obfuscation via AutoIt and Batch Scripts
The initial batch and AutoIt scripts used a rotating Vigenère-style cipher to encrypt strings, thwarting static analysis.
Decompiled AutoIt code revealed functions that applied character shifts based on a repeating key and bit array.
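The decryption routine amounts to a repeating-key character shift. The Python sketch below illustrates the general scheme with an invented alphabet, key, and sample string; it does not reproduce the sample’s actual routine or key material.

```python
# Repeating-key ("Vigenère-style") codec sketch; alphabet, key, and sample
# string are invented for illustration.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789 .:/\\-"

def shift_string(text: str, key: str, sign: int) -> str:
    out = []
    for i, ch in enumerate(text):
        # The key repeats over the data; each key character selects the shift.
        shift = ALPHABET.index(key[i % len(key)]) * sign
        out.append(ALPHABET[(ALPHABET.index(ch) + shift) % len(ALPHABET)])
    return "".join(out)

encoded = shift_string("schtasks /create ...", "KEY", +1)   # obfuscate
print(shift_string(encoded, "KEY", -1))                      # recovers the original
```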
After a seven-second execution delay, a tactic to outlast automated sandbox analysis windows, the script downloaded and decompressed additional CAB payloads, then scheduled recurring tasks disguised as legitimate software updates.
These layered evasion techniques successfully slip past signature-based anti-virus while complicating manual analysis.
Defense and Recommendations
Signature-only anti-virus solutions struggle to detect deeply obfuscated scripts and AI-generated decoys. Endpoint Detection and Response (EDR) platforms, such as Genian EDR, are essential for:
- Identifying malicious LNK shortcut files and the de-obfuscated PowerShell commands they reconstruct at execution time.
- Flagging behavior-based anomalies, including decompression of suspicious CAB files and deviation from known application update patterns.
- Visualizing full attack execution chains—regardless of time delays—through storylines that link phishing email arrival, script reconstruction, C2 communication, and scheduled task creation.
By combining real-time monitoring of process trees, environment variable manipulation, and outbound network connections, EDR can unravel obfuscation layers and neutralize threats before data is exfiltrated.
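As a rough illustration of that behavior-based approach (not Genian EDR’s actual detection logic), the sketch below flags the explorer.exe -> cmd.exe -> powershell.exe process chain that an .lnk-launched dropper typically produces; the telemetry record fields and sample values are assumptions.

```python
# Simplified process-chain heuristic over assumed telemetry fields.
from dataclasses import dataclass

@dataclass
class ProcEvent:
    pid: int
    ppid: int
    image: str
    cmdline: str

def find_lnk_style_chains(events):
    by_pid = {e.pid: e for e in events}
    hits = []
    for e in events:
        parent = by_pid.get(e.ppid)
        grandparent = by_pid.get(parent.ppid) if parent else None
        # Pattern: the user shell spawns cmd.exe, which spawns PowerShell
        # with a reconstructed or encoded command line.
        if (parent and grandparent
                and "powershell" in e.image.lower()
                and "cmd.exe" in parent.image.lower()
                and "explorer.exe" in grandparent.image.lower()):
            hits.append((grandparent, parent, e))
    return hits

sample = [
    ProcEvent(100, 1, r"C:\Windows\explorer.exe", "explorer.exe"),
    ProcEvent(200, 100, r"C:\Windows\System32\cmd.exe", 'cmd /c "...reconstructed..."'),
    ProcEvent(300, 200, r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
              "powershell -w hidden ..."),
]
print(find_lnk_style_chains(sample))   # prints the flagged three-step chain
```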
The Kimsuky group’s adoption of ChatGPT for deepfake creation marks a new frontier in APT operations.
Generative AI accelerates the production of convincing decoys, while advanced obfuscation in batch and AutoIt scripts evades traditional defenses.
Organizations must pivot toward robust EDR deployments to detect malicious behaviors hidden within obfuscated code and maintain continuous endpoint security against evolving AI-powered cyber threats.