Threat actors are rapidly weaponizing artificial intelligence to move from initial access to full domain compromise in under half an hour, leaving defenders with almost no room for error or delay.
As enterprises adopt AI across development, identity, and cloud workflows, adversaries are abusing the same tools to script lateral movement, automate reconnaissance, and scale post-exploitation at machine speed.
This compression means a well-prepared, AI-enabled intruder can realistically escalate privileges, discover domain controllers, and seize full domain access in less than half an hour before traditional security teams can triage an alert.
Adversaries are using AI models during live intrusions to generate one‑line commands for reconnaissance, credential harvesting, and data staging directly on compromised systems.
CrowdStrike’s latest threat intelligence shows that the average eCrime “breakout time” – the interval between initial access and lateral movement toward key assets – dropped to just 29 minutes in 2025, with the fastest observed breakout taking a mere 27 seconds.
In one case, the LAMEHUG malware used an LLM via the Hugging Face API to generate commands that enumerated hardware, processes, services, network configuration, and Active Directory domain information based on simple hardcoded prompts, effectively outsourcing reconnaissance logic to an AI backend.
The technology sector remained the most frequently targeted, reflecting its central role in critical business systems and supply chains.

This approach allows a single operator to move faster and adapt in real time without manually crafting every command.
Weaponized AI across the kill chain
Threat actors now integrate AI in multiple phases of the intrusion lifecycle, from initial access to collection.
In comparison to 2024, CrowdStrike Intelligence observed a 563% increase in incidents using fake CAPTCHA lures in 2025.
Moderately resourced eCrime groups such as PUNK SPIDER have used Gemini and DeepSeek to generate post‑exploitation scripts for credential dumping from backup databases and for destroying forensic evidence by terminating services and wiping artifacts.
This scripting support enables even mid‑tier operators to behave like advanced red teams, chaining identity abuse, backup compromise, and domain escalation into a single continuous sequence.
In another observed campaign, attackers abused victims’ own local AI command‑line tools, including Claude and Gemini, by shipping malicious npm packages that instructed these tools to generate commands for stealing authentication material and cryptocurrency.
CrowdStrike responders found more than 90 environments executing this adversary‑originated AI workflow, showing that threat actors are comfortable delegating core post‑exploitation tasks to agentic AI systems running inside victim networks.
State‑aligned actors are experimenting too. Russia‑nexus FANCY BEAR deployed LAMEHUG against Ukrainian government entities, embedding prompts that told the model to list commands to recursively copy office and PDF documents, gather domain information, and write system data to text files for exfiltration.
Forum users mentioned ChatGPT 550% more than any other model. Though users frequently compare ChatGPT to other AI models or generically refer to large language models (LLMs) as ChatGPT, the high number of mentions primarily reflects the model’s general popularity compared to other models.

While this first generation of LLM‑enabled malware did not yet outperform traditional tooling, it demonstrated how quickly reconnaissance, targeting, and staging can be automated once the model is wired into the intrusion toolchain.
AI reconnaissance to domain dominance
Once an intruder has valid credentials and a beachhead, AI‑generated scripts can accelerate the classic path to domain dominance: enumerate domain trusts, identify high‑value accounts and servers, dump LSASS or backup repositories, and abuse misconfigurations in hybrid identity.
In late February 2025, PRESSURE CHOLLIMA executed the largest cryptocurrency theft in history by compromising Safe{Wallet}, a digital asset management platform supporting cryptocurrency exchanges, to target funds held by the centralized cryptocurrency exchange Bybit.

CrowdStrike reports that valid account abuse accounted for 35% of cloud incidents in 2025, underscoring how often attackers are starting from “legitimate” identity footholds rather than obvious malware.
Combined with AI‑assisted discovery, this allows rapid privilege escalation and domain controller targeting, often inside that 30‑minute window.
The broader data shows how this capability is scaling. There was an 89% year‑over‑year increase in attacks by AI‑enabled adversaries in 2025, while 82% of detections were malware‑free, reflecting a decisive shift toward credential‑driven, tool‑based intrusions that blend into normal operations.
In this model, AI is less about novel exploits and more about compressing the time and expertise needed to chain existing techniques into a fast, coherent attack that can seize control of an organization’s entire domain before the SOC can meaningfully respond.
For defenders, this emerging “agentic adversary” era means that detection, investigation, and response must also move at AI speed.
Without cross‑domain telemetry, strong identity security, and automated containment capable of acting within minutes, organizations risk watching AI‑equipped threat actors turn a single compromised account into full domain access in the time it takes to finish a meeting.
Follow us on Google News, LinkedIn, and X to Get Instant Updates and Set GBH as a Preferred Source in Google.

