The cybersecurity industry has been drowning in waves of speculation about the impact of AI-enabled attacks since ChatGPT was launched. Today, that speculation has come crashing down.
AI-enabled cyberwarfare isn’t coming; it’s here. In September 2025, Anthropic detected what it later disclosed as the first documented case of a large-scale cyberattack executed without substantial human intervention.
Additionally, Armis’ 2026 State of Cyberwarfare Report (PDF) found that 92% of IT decision-makers in the U.S. are concerned about the impact of cyberwarfare on their organizations, with 64% reporting that they have already been impacted by an AI-generated or AI-led attack over the last 12 months.
Attackers are now operating at machine speed, but most defenders remain anchored to human processes and static intelligence. The gap between threat actors and defenders is widening. Nearly half (45%) of U.S. IT decision-makers are still detecting and responding to a significant cyberattack only as it occurs or after the damage has already been done.
In the face of this widening gap, the most basic vulnerabilities become the most dangerous.
Testifying before the U.S. House Committee on Homeland Security in December 2025, Royal Hansen, Vice President of Privacy, Safety, and Security Engineering at Google, stated, “it is clear that legacy systems, misconfigured cloud environments, and the exploitation of known vulnerabilities remain significant concerns.”
The solution, in Hansen’s words: “AI allows security professionals and defenders to scale and accelerate their work in threat detection, malware analysis, vulnerability detection, vulnerability fixing and incident response.”
As nation-state threat actors deploy autonomous agents to scale their operations, cybersecurity must do the same. The industry needs to pivot toward collective, agentic defense mechanisms: specifically, a “hive mind” architecture that shares collective intelligence across organizations.
The Rise of the Machines
The democratization of AI-enabled cyberattacks is no longer speculation – it is an observed trajectory.
A Chinese state-sponsored threat actor, GTG-1002, weaponized Claude Code (an agentic coding assistant) into an autonomous attack platform. According to Anthropic, human operators made just four to six strategic decisions per campaign, such as selecting targets and authorizing escalation. The AI executed everything else.
Under the control of GTG-1002, Claude mapped the complete network topology across multiple IP ranges, identified high-value systems, queried databases, extracted data, and parsed the results to identify proprietary information. Anthropic estimated that Claude executed 80-90% of the attack independently, issuing thousands of requests, often multiple per second – “an attack speed that would have been, for human hackers, simply impossible to match.”
As Anthropic noted in its disclosure, any AI model with comparable capabilities could be exploited in the same way. The barrier to conducting these attacks has dropped, and it is not coming back.
A Legacy of Vulnerabilities
The GTG-1002 attack did not emerge in a vacuum. The threat landscape is already full of nation-state threat actors exploiting vulnerable attack surfaces that agentic AI is now positioned to discover and exploit at scale.
For example, Salt Typhoon, another Chinese state-sponsored threat actor, has been active since at least 2019. According to the FBI, the group has breached more than 200 organizations across more than 80 countries. Its primary targets have been telecommunications providers, enabling Chinese intelligence access to call records, text messages, and phone audio from senior government officials.
In February 2026, Michael Machtinger, Deputy Assistant Director for Cyber Intelligence at the FBI, said that “the threat posed by Salt Typhoon actors and the rest of the PRC intelligence apparatus and enabling infrastructure is still very, very much ongoing.”
Like Hansen, Machtinger also contends that “despite all the advances in cybersecurity tools and strategies, it is still the most basic vulnerabilities that provide entry points.”
The problem is clear. Signature-based detection cannot identify polymorphic malware. Manual triage cannot match autonomous reconnaissance. Static intelligence describes yesterday’s attacks, which become today’s breach headlines. Defenders who rely on traditional solutions cannot prevent the attacks of tomorrow.
Enter “The Hive Mind” – Collective Defense
Cybersecurity must adopt autonomous, distributed, machine-speed intelligence to combat threats that operate in the same way. Ad hoc security tools and siloed threat intelligence cannot match the velocity of agentic cyberattacks.
The agentic era enables a new architecture: a shift to collective defense.
Think of it like Waze for cybersecurity. Organizations can leverage real-time telemetry from millions of signals to identify, contextualize, and respond to threats as they emerge.
Federated learning enables organizations to train shared AI models on distributed datasets without exposing proprietary information. Differential privacy techniques ensure that the collective intelligence cannot be reverse-engineered to expose any individual organization’s data.
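The mechanics are straightforward to sketch. The example below is an illustrative simulation only (not any vendor’s implementation): each participating organization computes a local model update on its own telemetry, clips and noises that update before it leaves the organization (a common differential-privacy mechanism), and a coordinator averages the privatized updates. All data, parameter names, and noise settings here are hypothetical.

```python
# Illustrative sketch: federated averaging of threat-detection model weights with
# per-organization clipping and Gaussian noise. Hypothetical data and parameters.
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_weights, local_data, lr=0.1):
    """One local training step on an organization's private telemetry.
    The 'model' here is a simple linear scorer trained by gradient descent."""
    X, y = local_data
    preds = X @ global_weights
    grad = X.T @ (preds - y) / len(y)
    return global_weights - lr * grad

def privatize(update, global_weights, clip_norm=1.0, noise_scale=0.05):
    """Clip the weight delta and add Gaussian noise before it leaves the org,
    so the shared update cannot be traced back to any single record."""
    delta = update - global_weights
    norm = np.linalg.norm(delta)
    if norm > clip_norm:
        delta = delta * (clip_norm / norm)
    delta += rng.normal(0.0, noise_scale * clip_norm, size=delta.shape)
    return global_weights + delta

def federated_round(global_weights, org_datasets):
    """Average the privatized updates from all participating organizations."""
    updates = [
        privatize(local_update(global_weights, data), global_weights)
        for data in org_datasets
    ]
    return np.mean(updates, axis=0)

# Simulated telemetry: five organizations, each holding private data generated
# from the same underlying "attack" signal (true_w). Raw data is never pooled.
true_w = np.array([0.8, -0.5, 0.3])
orgs = []
for _ in range(5):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    orgs.append((X, y))

w = np.zeros(3)
for _ in range(50):
    w = federated_round(w, orgs)

print("learned weights:", np.round(w, 2))  # approaches true_w without sharing raw data
```

The design choice this illustrates is the separation of concerns: only noised model deltas cross organizational boundaries, while raw telemetry stays inside each environment.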
The industry does not suffer from a shortage of vulnerability data. If anything, defenders are struggling to make sense of too much of it. The key is context. Organizations need context from their own environments to prioritize responses to their greatest risks and threats. Collective defense can provide even more context on the behavioral patterns of threats.
Rather than matching known signatures, behavioral analytics identifies anomalous patterns indicative of an attack. When one organization encounters a novel attack pattern, the entire collective benefits from that intelligence within seconds, not days.
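As a toy illustration of the difference, the sketch below (hypothetical feature names and thresholds, not any specific product) trains an unsupervised model on baseline per-host behavior and flags deviations from that baseline, rather than matching a fixed signature.

```python
# Illustrative sketch of behavioral anomaly detection: model "normal" per-host
# behavior from historical telemetry, then score new activity for deviation.
# Features, distributions, and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Baseline telemetry: rows = host-hours, columns = [auth failures, MB egressed,
# distinct internal hosts contacted]. Normal activity clusters tightly.
baseline = np.column_stack([
    rng.poisson(2, 5000),          # a few failed logins per hour is normal
    rng.gamma(2.0, 5.0, 5000),     # modest outbound data volume
    rng.poisson(4, 5000),          # a handful of internal destinations
])

detector = IsolationForest(contamination=0.01, random_state=1).fit(baseline)

# New activity: one host suddenly fans out across the network and exfiltrates
# data – no fixed signature, but a clearly anomalous behavioral shape.
new_activity = np.array([
    [1,   8.0,   3],    # ordinary host
    [3,  12.0,   5],    # ordinary host
    [40, 900.0, 250],   # reconnaissance plus bulk egress
])

scores = detector.decision_function(new_activity)   # lower = more anomalous
flags = detector.predict(new_activity)              # -1 = anomaly
for row, score, flag in zip(new_activity, scores, flags):
    status = "ANOMALOUS" if flag == -1 else "normal"
    print(f"{status:9s} score={score:+.3f} features={row.tolist()}")
```

In a collective-defense setting, the anomalous behavioral profile (not the raw telemetry) is what would be shared so other participants can score their own environments against it.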
The cybersecurity response to AI-enabled nation-state threats cannot be incremental. It must be architectural. Collective defense is a force multiplier. The adversary has already automated its offense. Will defenders be able to do the same?