A new report from Booz Allen Hamilton warns that cybersecurity is entering a ‘machine-speed’ era where AI (artificial intelligence) is collapsing the time between intrusion and impact, allowing attackers to plan, test, and execute multi-stage operations in minutes with minimal human input. The analysis finds that threat actors are adopting AI faster than defenders, using it to rapidly identify vulnerabilities, establish persistence, and scale attacks, while most security operations still rely on slower, human-driven processes that struggle to keep pace.
Titled ‘When Cyberattacks Happen at AI Speed,’ the report highlights a widening speed gap that is reshaping cyber risk across sectors, particularly in critical infrastructure, where traditional detect-and-respond models are no longer sufficient against continuously evolving, AI-enabled threats. As adversaries automate the full attack lifecycle and operate at machine speed once inside networks, organizations are being forced to rethink cybersecurity architectures, shifting toward real-time, AI-driven defense models capable of matching the tempo and scale of modern attacks.
Booz Allen notes that in 2026, AI-enabled threats are outpacing cyber defenses, creating unprecedented risks to national security and economic stability. “Government agencies, private companies, and the information technology (IT) and operational technology (OT) systems they depend on are all seen as one connected target by attackers. Yet most cyber defenses still run on a human timeline: triage in hours, remediation in days, patching in weeks.”
Highlighting that attackers now operate on a different, much faster clock, Booz Allen reported that the shift accelerating cyber threats is the rise of AI agents: software that can carry out multi-step tasks with minimal guidance. A human sets the objective.
“The AI agent chooses tools, runs actions, reads results, and keeps iterating until it reaches the objective. This dramatically increases the pace and scale of attacks,” the report detailed. “AI agents can scan more targets, test more options, and move from a new idea to a working intrusion far faster than human operators alone. Large enterprises often run tens of thousands of endpoints and workloads, far more than traditional investigation and response processes can monitor in real time. Incidents can begin before patches are deployed. Even when defenders and attackers learn about vulnerabilities at the same time, AI-enabled attackers can exploit them within hours while defenders are still determining exposure and rolling out fixes.”
Booz Allen reported that the convergence of AI and cyber threats introduced a new operational tempo that organizations are struggling to match. CISA mandates a 15-day remediation window for critical vulnerabilities, yet 60% of those vulnerabilities remain unmitigated after that deadline, a gap that adversaries are actively exploiting. The cost barrier to launching sophisticated attacks has collapsed, with the average cost to auto-generate a CVE exploit now just $2.77, and AI tools have already demonstrated the ability to identify 500 zero-day exploits in open-source code.
The speed of exploitation has become equally alarming; in August–September 2025, attackers using the AI-driven HexStrike framework exploited CVE-2025-7775 across more than 8,000 endpoints in under 10 minutes, effectively replacing the need for human hackers. This acceleration did not happen overnight. The timeline of AI-driven attacks stretches back to May 2023, when AI-forged OAuth tokens enabled Storm-0558 to breach Microsoft Azure and compromise more than 25 U.S. government organizations.
The report said that by late 2023, MSS-linked groups were already using large language models such as GPT-4 and Ernie for mass AI-generated spearphishing campaigns. In November 2024, a Google Gemini agent autonomously discovered a SQLite zero-day, marking the first AI-found CVE. The following month, a PRC APT breached the U.S. Treasury via BeyondTrust, exfiltrating more than 3,000 files from OFAC.
The pace intensified further in 2025, with the July release of VILLAGER, an AI-native pentesting tool built on DeepSeek v3 with 4,201 exploit prompts, followed by a September attack in which jailbroken Claude Code executed an autonomous campaign against 30 targets with minimal human input. By January 2026, CVE-GENIE was reproducing 51% of CVEs as working exploits using chained AI models, effectively removing the barrier to entry for sophisticated cyberattacks at scale.
Booz Allen recognizes that most cyber defenses still run on human timelines. Analysts review alerts, teams escalate incidents, and leaders weigh operational risk before approving containment actions. That process can take hours or days because it was designed for threats that unfold slowly and allow time for investigation. AI-enabled attackers move in minutes. When attackers move faster than defenders, they control the fight.
Furthermore, the gap between speed of attack and time to respond is widening. “In 2025, the average breakout time from initial access to ability to move into other systems dropped to under 30 minutes, with the fastest cases measured in seconds. AI-enabled tools now automate reconnaissance, generate exploits, and scan thousands of systems simultaneously. Small teams—or even single operators—can run campaigns that once required large, coordinated groups of specialists.”
The report mentioned that AI is also accelerating how offensive tools are built. Operators define the objective and constraints, and language models generate code, test it, and refine it until it works. Development cycles that once took weeks now take hours. Social engineering has always worked; AI allows attackers to produce convincing emails, documents, and tailored personas at an industrial scale. Techniques that once remained inside elite units now spread quickly across criminal ecosystems.
The Booz Allen report outlines three key decisions organizations must make to keep pace with AI-driven threats. First, cyber defense must move to AI speed, meaning early containment actions such as isolating systems, blocking malicious traffic, revoking suspicious sessions, and initiating remediation cannot wait for manual approval and must occur automatically within defined limits while an intrusion is still unfolding.
Second, organizations must treat AI platforms as critical infrastructure, as these systems increasingly centralize data, identity, and workflow authority, creating new entry points for attackers and requiring enforceable security baselines for access, integration, and monitoring.
Third, organizations need to adopt a human-AI teaming model in which AI systems can draft detections, investigate alerts, and trigger containment or remediation in seconds, allowing human analysts to focus on complex investigations and disrupting sophisticated campaigns, while leadership aligns on what actions can be automated and how to manage the operational risks involved.
To move cyber defense to AI speed, the Booz Allen report argues that containment must begin while an intrusion is still unfolding, rather than waiting for human investigation or approval. Organizations should preapprove automated response actions, such as isolating compromised systems, blocking malicious traffic, revoking suspicious access, and preventing risky changes, triggered when defined thresholds are met. This requires integrating and AI-enabling security and network operations to eliminate silos and accelerate response.
While automation may introduce false positives, the priority shifts toward rapid containment over perfect accuracy, supported by clear boundaries, rollback mechanisms, and full auditability. The report also emphasizes the need for aligned legal and governance frameworks, investment in tools that enable automated enforcement at scale, and adoption of zero trust principles to reduce attack surfaces and limit attacker movement.
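The preapproval model the report describes — automated actions fired when defined thresholds are met, bounded by rollback and full auditability — can be sketched in a few lines. The action names, risk-score thresholds, and audit format below are illustrative assumptions, not details from the Booz Allen report:

```python
import datetime

# Hypothetical policy engine for preapproved containment actions.
# Each action fires automatically once an alert's risk score meets
# its threshold; every decision is logged so actions can be audited
# and rolled back if the alert proves to be a false positive.

PREAPPROVED_ACTIONS = {
    # action -> minimum risk score at which it may fire without approval
    "isolate_host": 80,
    "block_traffic": 60,
    "revoke_session": 50,
}

AUDIT_LOG = []  # full auditability: every decision, taken or not, is recorded


def contain(alert: dict) -> list[str]:
    """Apply every preapproved action whose threshold the alert meets."""
    taken = []
    for action, threshold in PREAPPROVED_ACTIONS.items():
        executed = alert["risk_score"] >= threshold
        AUDIT_LOG.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "alert": alert["id"],
            "action": action,
            "executed": executed,
        })
        if executed:
            taken.append(action)  # in practice: call the SOAR/EDR API here
    return taken


def rollback(alert_id: str) -> list[str]:
    """List executed actions for an alert so they can be reversed."""
    return [entry["action"] for entry in AUDIT_LOG
            if entry["alert"] == alert_id and entry["executed"]]
```

A medium-risk alert (`risk_score` 65) would trigger traffic blocking and session revocation but stop short of host isolation, and `rollback` replays the audit trail to undo exactly those actions — the "rapid containment over perfect accuracy" trade-off the report describes, kept reversible.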
To secure AI platforms as critical infrastructure, the Booz Allen report stresses that voluntary guidance is no longer sufficient given the level of risk these systems introduce. Organizations must establish enforceable, testable security baselines before trusting AI platforms in enterprise environments, including strong authentication, detailed activity logging, secure management of sensitive data and credentials, strict controls over integrations, and secure-by-default configurations.
It also calls for clear decisions around operational risk, determining which workloads can run on commercial AI services, which require hardened environments, and which must remain in tightly controlled systems due to their sensitivity and mission impact.
To adopt a human-AI teaming model, the Booz Allen report emphasizes that defenders must scale operations by using automated agents to handle routine tasks such as alert triage, detection updates, and initial containment within seconds. This allows human analysts to shift from manual response to supervisory roles, focusing on refining detection logic and handling complex incidents that require judgment.
The model enables a single responder to oversee multiple investigations simultaneously, significantly expanding capacity. However, effective implementation requires clear authority and decision ownership, as delays often stem from unclear approval processes. Leadership must define in advance which actions can be automated, when human intervention is needed, and who has the authority to escalate responses, ensuring teams can act immediately when incidents occur.
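The teaming model above — automated agents handling routine triage in seconds while humans see only the cases that need judgment — can be illustrated with a minimal routing loop. The severity scores and escalation rule here are assumptions for illustration, not thresholds from the report:

```python
# Hypothetical sketch of a human-AI teaming triage loop: an automated
# agent closes routine noise and contains clear-cut threats on its own,
# escalating only ambiguous alerts to a human analyst. The score bands
# below are illustrative assumptions.

def agent_triage(alert: dict) -> str:
    """Automated first pass over a single alert."""
    if alert["score"] < 20:
        return "auto_closed"      # routine noise; no human time spent
    if alert["score"] >= 90:
        return "auto_contained"   # clear threat; contain immediately
    return "escalated"            # ambiguous; requires human judgment


def triage_queue(alerts: list[dict]) -> dict[str, list[str]]:
    """Route a whole queue; analysts only see the escalated slice."""
    routed = {"auto_closed": [], "auto_contained": [], "escalated": []}
    for alert in alerts:
        routed[agent_triage(alert)].append(alert["id"])
    return routed
```

In this sketch, a queue of hundreds of alerts collapses to the escalated handful, which is how a single responder can plausibly oversee multiple investigations at once; the thresholds themselves are exactly the kind of pre-agreed authority boundaries the report says leadership must define in advance.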
The Booz Allen report concludes that while attackers can now plan and execute operations in minutes, many defenders still operate on timelines measured in hours or days, creating a widening gap driven not by lack of capability but by slow modernization. Closing this gap requires transforming cyber defense, as well as IT and OT operations, to operate at AI speed.
The report calls for a set of fundamental shifts, including recognizing the scale and speed of the threat, implementing zero trust at scale, converging and AI-enabling security and network operations, treating AI systems as critical infrastructure with enforceable security controls, and optimizing human-AI teaming. Organizations that make these changes can contain attacks earlier and limit damage, while those that do not risk detecting intrusions only after attackers have already established control.
Ultimately, the report frames cybersecurity as a race against time, with attackers now operating at AI speed, probing thousands of systems and moving through networks in minutes. Defenders that cannot detect, contain, and remediate intrusions within that narrow window risk losing control of their systems while attacks are still in progress. The report makes clear that AI-enabled intrusions are no longer hypothetical, and the defining challenge is whether organizations can respond in time or only after the damage is done.
Last October, Booz Allen warned that the People’s Republic of China (PRC) has developed a sophisticated and persistent cyber acceleration strategy that enables it to conduct global cyber operations with remarkable scale and effectiveness. From infiltrating governments to manipulating supply chains and shaping online narratives, China’s cyber activities are both widespread and impactful. However, the true extent of its success remains poorly understood.

