Google warns artificial intelligence is accelerating cyberattacks and zero-day exploits

Pierluigi Paganini
May 11, 2026

Google says hackers now use AI to create exploits, automate attacks, evade defenses, and target AI supply chains at scale.

Artificial intelligence is rapidly changing the cyber threat landscape, and a new report from the Google Cloud Threat Intelligence team highlights how attackers already use AI to improve vulnerability exploitation and gain initial access to cloud environments.

The report shows a clear shift in attacker behavior: attackers now gain entry by exploiting software flaws and cloud services more often than by using stolen credentials or phishing, making vulnerability exploitation a top initial access method.

One of the most important findings concerns the growing role of AI in offensive operations. Attackers no longer use AI only to write phishing emails or automate repetitive tasks. They now experiment with AI systems capable of identifying vulnerabilities, generating exploit code, and accelerating attack chains.

Google researchers warn that the industry is entering a new phase of AI-enabled cybercrime. The report notes that threat actors increasingly integrate AI throughout the attack lifecycle, from reconnaissance to exploitation and malware development.

“AI-enabled malware, such as PROMPTSPY, signal a shift toward autonomous attack orchestration, where models interpret system states to dynamically generate commands and manipulate victim environments.” reads the report published by Google. “Our analysis of this malware reveals previously unreported capabilities and use cases for its integration with AI. This approach allows threat actors to offload operational tasks to AI for scaled and adaptive activity.”

Researchers warn that threat actors no longer use AI only to improve productivity. Cybercriminals and state-backed groups now test AI systems that can adapt during attacks, automate decisions, accelerate operations, and support tasks once handled only by human operators, marking a major shift in modern cyber operations.

The report also describes how attackers exploit newly disclosed vulnerabilities much faster than before. In some cases, criminals start scanning the internet for exposed systems within hours or days after security researchers publish technical details. That acceleration leaves defenders with very little time to patch systems before attackers strike.
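Shrinking that exposure window is largely a monitoring and patch-prioritization problem for defenders. As a rough illustration (not something taken from the Google report), the Python sketch below polls the public NVD 2.0 REST API for CVEs published in the last 24 hours and filters them against a placeholder keyword list standing in for an organization's software inventory; the endpoint and parameters are NVD's documented ones, while the keyword list and time window are assumptions made for the example.

```python
# Minimal sketch: watch NVD for freshly published CVEs that mention products you run,
# so patching can start before mass scanning does. Uses the public NVD 2.0 API;
# WATCHED_KEYWORDS is a placeholder for a real software inventory.
from datetime import datetime, timedelta, timezone
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
WATCHED_KEYWORDS = ["vmware", "confluence", "citrix"]  # hypothetical inventory

def recent_cves(hours_back: int = 24) -> list[dict]:
    """Fetch CVEs published within the last `hours_back` hours."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(hours=hours_back)
    params = {
        "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
    }
    resp = requests.get(NVD_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("vulnerabilities", [])

def matches_inventory(cve_item: dict) -> bool:
    """Crude keyword match against the CVE description text."""
    descriptions = cve_item["cve"].get("descriptions", [])
    text = " ".join(d.get("value", "") for d in descriptions).lower()
    return any(keyword in text for keyword in WATCHED_KEYWORDS)

if __name__ == "__main__":
    for item in recent_cves():
        if matches_inventory(item):
            print(item["cve"]["id"])
```

In practice this kind of feed would be joined with asset data and exploitation intelligence (for example CISA's Known Exploited Vulnerabilities list) rather than plain keyword matching, but even a crude daily check narrows the gap between disclosure and patching.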

Google identified the first known AI-developed zero-day exploit tied to a planned mass attack. Chinese and North Korean actors also show strong interest in using AI to discover vulnerabilities.

“For the first time, GTIG has identified a threat actor using a zero-day exploit that we believe was developed with AI. The criminal threat actor planned to use it in a mass exploitation event but our proactive counter discovery may have prevented its use.” continues the report. “Threat actors associated with the People’s Republic of China (PRC) and the Democratic People’s Republic of Korea (DPRK) have also demonstrated significant interest in capitalizing on AI for vulnerability discovery.”


Google found that attackers increasingly use software flaws to breach cloud environments, targeting APIs, SaaS apps, developer platforms, and AI services.

AI plays an important role in this acceleration. Large language models (LLMs) help attackers analyze technical documentation, understand proof-of-concept exploits, and generate malicious scripts faster than traditional methods allowed. Researchers increasingly fear that AI could reduce the technical barrier required to launch sophisticated attacks.

The report highlighted another critical issue: attackers increasingly target the broader AI ecosystem rather than AI models alone. Exposed API keys, insecure integrations, excessive permissions, and vulnerable third-party tools create new attack surfaces.

Recent investigations revealed cases where exposed Google Cloud API keys unintentionally granted access to Gemini AI services after configuration changes. Security researchers found thousands of publicly exposed keys that attackers could abuse to access sensitive AI endpoints or generate massive cloud costs.
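The article does not describe how such exposure is audited, but a rough, defensive sketch is shown below: it scans a source tree for strings matching the publicly documented shape of Google API keys (strings beginning with “AIza”) and, for each candidate, makes a read-only call to the Gemini model-listing endpoint to see whether the key is accepted. The key pattern and endpoint are publicly documented conventions; everything else is an illustration rather than a complete secret-scanning tool.

```python
# Rough sketch of a defensive secret audit: find strings shaped like Google API keys
# in a source tree, then check (read-only) whether a key is accepted by the Gemini
# model-listing endpoint. Illustrative only; real secret scanning needs far more care.
import pathlib
import re
import requests

# Publicly documented shape of Google API keys: "AIza" followed by 35 characters.
KEY_PATTERN = re.compile(r"AIza[0-9A-Za-z_\-]{35}")
GEMINI_MODELS_URL = "https://generativelanguage.googleapis.com/v1beta/models"

def find_candidate_keys(root: str) -> set[str]:
    """Collect strings that look like Google API keys from files under `root`."""
    keys = set()
    for path in pathlib.Path(root).rglob("*"):
        if path.is_file() and path.stat().st_size < 1_000_000:
            try:
                keys.update(KEY_PATTERN.findall(path.read_text(errors="ignore")))
            except OSError:
                continue
    return keys

def key_reaches_gemini(key: str) -> bool:
    # A 200 response means the key is accepted by the Gemini API surface;
    # 400/403 usually means it is restricted or invalid.
    resp = requests.get(GEMINI_MODELS_URL, params={"key": key}, timeout=15)
    return resp.status_code == 200

if __name__ == "__main__":
    for key in find_candidate_keys("."):
        status = "reaches Gemini" if key_reaches_gemini(key) else "restricted or invalid"
        print(key[:10] + "…", status)
```

Only keys you own should ever be tested this way; the point of the sketch is that an unrestricted key is indistinguishable, to an attacker, from an invitation to run up AI usage costs or reach sensitive endpoints.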

Google also expanded its detection capabilities to monitor AI-related threats inside cloud environments. The company now tracks suspicious activity involving AI services, including abnormal service account usage, unusual AI API calls, malicious binaries, reverse shells, and data exfiltration attempts targeting AI workloads.
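The article does not detail how that monitoring works internally, but the general idea can be sketched: baseline how often each service account normally calls AI-related APIs, then flag accounts whose current call volume is far above their own history. The record format, field names, and z-score threshold below are assumptions made for illustration, not Google's detection logic.

```python
# Toy sketch of per-service-account anomaly detection on AI API call volume.
# Baselines are hourly call counts per principal; the z-score threshold is arbitrary.
# This is an illustration, not Google's detection logic.
from collections import defaultdict
from statistics import mean, pstdev

def flag_anomalous_principals(hourly_counts: dict[str, list[int]],
                              current: dict[str, int],
                              z_threshold: float = 3.0) -> list[str]:
    """Return service accounts whose current AI API call count is far above baseline."""
    flagged = []
    for principal, history in hourly_counts.items():
        if len(history) < 24:          # need some baseline before judging
            continue
        mu, sigma = mean(history), pstdev(history) or 1.0
        z = (current.get(principal, 0) - mu) / sigma
        if z > z_threshold:
            flagged.append(principal)
    return flagged

if __name__ == "__main__":
    # Hypothetical principals and counts, purely for demonstration.
    baseline = defaultdict(list, {
        "sa-data-pipeline@example.iam": [3, 2, 4] * 10,
        "sa-build@example.iam": [0, 1, 0] * 10,
    })
    now = {"sa-data-pipeline@example.iam": 4, "sa-build@example.iam": 250}
    print(flag_anomalous_principals(baseline, now))   # -> ['sa-build@example.iam']
```

Volume is only one signal; the behaviors Google lists (reverse shells, malicious binaries, exfiltration attempts) would be caught by separate detections, with per-principal baselining covering the quieter abuse of legitimate AI APIs.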

“Adversaries like “TeamPCP” (aka UNC6780) have begun targeting AI environments and software dependencies as an initial access vector. These supply chain attacks result in multiple types of machine learning (ML)-focused risks outlined in the Secure AI Framework (SAIF) taxonomy, namely Insecure Integrated Component (IIC) and Rogue Actions (RA).” continues the report. “Our analysis of forensic data associated with these attacks reveals threat actors attempting to pivot from compromised AI software to broader network environments for initial access and to engage in disruptive activities, such as ransomware deployment and extortion.”

The report states that software-based entry has become one of the dominant intrusion methods in cloud attacks. This trend reflects the increasing difficulty of stealing credentials from organizations that adopted MFA and stronger identity protections. Attackers instead focus on unpatched software, insecure APIs, and third-party integrations.

Another major concern involves autonomous AI-assisted attacks. Researchers and security companies have already documented early cases in which AI systems conducted reconnaissance, vulnerability scanning, and exploitation with limited human supervision. Anthropic recently disclosed an incident involving an AI-orchestrated cyberattack allegedly linked to a state-sponsored Chinese group. According to the company, the attackers used AI tools for reconnaissance, credential theft, and data exfiltration.

Although fully autonomous cyberattacks remain limited, Google researchers believe the trend will continue. AI systems increasingly support attackers by shortening operational timelines and improving scalability.

The report also examined how threat actors interact with generative AI systems. Google found that many attackers attempt to bypass AI safety protections using jailbreak prompts and prompt engineering techniques. However, most attempts remain unsophisticated and rely on publicly available methods rather than advanced AI manipulation.

Importantly, the report stressed that AI does not replace traditional attack techniques. Many successful breaches still originate from common security failures such as misconfigurations, exposed services, weak access controls, and poor patch management. A separate report from Wiz found that basic security mistakes still contribute to most cloud breaches.

The researchers also emphasized that defenders can use AI to strengthen security operations. AI tools already help analysts process telemetry, prioritize alerts, identify suspicious patterns, and accelerate incident response. However, the same technologies remain available to attackers.

One of the clearest warnings in the report is that the cloud threat landscape is shifting rapidly. That shift no longer concerns only malware or phishing; it involves the convergence of AI, cloud infrastructure, automation, and software exploitation into a faster and more scalable attack model.

The overall message from Google’s analysis is clear: organizations can no longer treat AI security as a future problem. Attackers already use AI to improve operations, accelerate exploitation, and target cloud ecosystems. Companies must strengthen vulnerability management, secure APIs and AI integrations, monitor third-party relationships, and reduce exposure windows before attackers exploit them.

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

Pierluigi Paganini

(SecurityAffairs – hacking, Artificial intelligence)






