CyberSecurityNews

Google Cloud Vertex AI Platform Vulnerability Allows Attackers to Access Sensitive Data


Artificial intelligence agents are rapidly becoming integral to enterprise workflows, but they also introduce new attack surfaces.

Security researchers recently uncovered a significant vulnerability within Google Cloud Platform’s Vertex AI Agent Engine.

By exploiting default permission scoping, attackers could weaponize deployed AI agents into “double agents” that secretly exfiltrate data and compromise cloud infrastructure.

Exploiting Default Permissions

The core issue lies in the default permissions granted to the Per-Project, Per-Product Service Agent (P4SA) associated with deployed AI agents.

Malicious agent response, containing service agent credentials (Source: Palo Alto)

Researchers built a test agent using Google's Agent Development Kit (ADK) and discovered they could easily extract the underlying service agent's credentials.

Using these stolen credentials, an attacker could pivot out of the AI agent’s isolated execution context and infiltrate the broader consumer project.
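The report does not publish the exact extraction technique, but the pivot is plausible because code running inside a Google-managed runtime carries the service agent's ambient identity. As a hedged sketch, any code an agent is tricked into executing can request that identity's OAuth token from the standard metadata server:

```python
import urllib.request

# Illustrative sketch only: code running inside a GCP-managed runtime can
# obtain its attached service account's access token from the metadata
# server. An agent coerced into running attacker-supplied code could do
# the same, leaking the P4SA's credentials in its response.
METADATA_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)

def build_token_request():
    req = urllib.request.Request(METADATA_URL)
    req.add_header("Metadata-Flavor", "Google")  # header the server requires
    return req  # urllib.request.urlopen(req) would return the token JSON

req = build_token_request()
```

The request itself only works from inside Google's infrastructure, which is precisely why credentials that escape that boundary are so valuable to an attacker.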


This privilege escalation transforms a helpful AI tool into a dangerous insider threat. With the compromised identity, an attacker could execute several malicious actions:

  • Read all data within consumer Google Cloud Storage buckets.
  • Access restricted Google-owned Artifact Registry repositories.
  • Download proprietary container images tied to the Vertex AI Reasoning Engine.
  • Map internal software supply chains to identify further vulnerabilities.
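To illustrate the first of these actions: once a bearer token leaves the agent's sandbox, it authenticates directly against Google Cloud REST APIs from anywhere. The sketch below (token and project name are placeholders) builds such a request against the Cloud Storage JSON API:

```python
import urllib.request

# Hypothetical sketch: with a leaked P4SA access token, an attacker can
# call Google Cloud REST APIs from their own machine. The token below is
# a placeholder, not a real credential.
LEAKED_TOKEN = "ya29.EXAMPLE"

def list_buckets_request(project_id, token):
    # GCS JSON API: GET /storage/v1/b?project=... with a Bearer token.
    url = f"https://storage.googleapis.com/storage/v1/b?project={project_id}"
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Bearer {token}")
    return req  # urllib.request.urlopen(req) would enumerate the buckets

req = list_buckets_request("victim-project", LEAKED_TOKEN)
```

Nothing in this flow distinguishes the attacker from the legitimate service agent, which is why the default permission scope of that identity matters so much.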

The compromised credentials also granted access to the Google-managed tenant project dedicated to the agent instance.

Reformatted output showing extracted information (Source: Palo Alto)

Within this environment, Palo Alto Networks researchers found sensitive deployment files, including references to internal storage buckets and a Python pickle file.

Python’s pickle module is historically insecure for deserializing untrusted data. If an attacker successfully manipulated this file, they could achieve remote code execution to establish a persistent backdoor.
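Pickle's danger is easy to demonstrate: the format can instruct the loader to invoke arbitrary callables during deserialization. In this deliberately harmless sketch the payload calls `eval` on load; a real attacker tampering with a deployment's pickle file would substitute something like `os.system`:

```python
import pickle

class Exploit:
    # pickle calls __reduce__ when serializing; on load it invokes the
    # returned callable with the given arguments. Here that is a benign
    # eval("2 + 2") — an attacker could return (os.system, ("<cmd>",))
    # instead to run shell commands on the host.
    def __reduce__(self):
        return (eval, ("2 + 2",))

payload = pickle.dumps(Exploit())
result = pickle.loads(payload)  # code runs during deserialization
print(result)  # 4 — no Exploit instance is ever actually restored
```

Because the callable runs before any application logic sees the object, validating a pickle's contents after loading it is already too late; untrusted pickles should simply never be loaded.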

Additionally, the default OAuth 2.0 scopes assigned to the Agent Engine were found to be dangerously permissive.

These overly broad scopes could, in theory, extend an attacker’s reach beyond the cloud environment into an organization’s Google Workspace applications.

While the absence of corresponding Identity and Access Management (IAM) permissions prevented immediate access, the overly broad scopes represented a severe structural security weakness.
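A practical defensive step is auditing which scopes are actually attached to an agent's identity. The helper below is a hypothetical sketch (the broad-scope list is illustrative, not exhaustive) that flags scopes granting sweeping access:

```python
# Hypothetical audit helper: flag OAuth 2.0 scopes that grant blanket
# access rather than being limited to what the agent actually needs.
BROAD_SCOPES = {
    "https://www.googleapis.com/auth/cloud-platform",  # all GCP APIs
    "https://mail.google.com/",                        # full Gmail access
}

def find_broad_scopes(granted):
    """Return the subset of granted scopes considered overly permissive."""
    return sorted(s for s in granted if s in BROAD_SCOPES)

granted = [
    "https://www.googleapis.com/auth/cloud-platform",
    "https://www.googleapis.com/auth/devstorage.read_only",
]
flagged = find_broad_scopes(granted)
```

Even when IAM bindings currently block a scope from being exercised, removing the scope itself eliminates the latent risk rather than relying on a second control to hold.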

Vertex AI Reasoning Engine Service Agent permissions (Source: Palo Alto)

Enforcing Least Privilege

Following a responsible disclosure process, Google collaborated with the security researchers to mitigate these threats.

Google confirmed that robust controls prevent attackers from altering production base images, blocking potential cross-tenant supply chain attacks.

They also updated their official Vertex AI documentation to increase transparency around resource and account usage.

To properly secure Vertex Agent Engine deployments, organizations must abandon default configurations. Google now recommends a Bring Your Own Service Account (BYOSA) approach.

By replacing the default service agent with a custom account, security teams can strictly enforce the principle of least privilege and grant the AI agent only the exact permissions required to function.
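As a sketch of the BYOSA idea (field names here are illustrative, not the exact Agent Engine API), the deployment configuration names a custom, minimally privileged service account instead of inheriting the default P4SA:

```python
# Hypothetical helper illustrating BYOSA: build an agent deployment spec
# that pins a caller-supplied, least-privilege service account. The field
# names are illustrative — consult the Vertex AI Agent Engine docs for
# the authoritative API shape.
def build_agent_spec(display_name, service_account):
    return {
        "displayName": display_name,
        "spec": {
            # Custom identity replaces the default service agent, so the
            # agent holds only the IAM roles explicitly granted to it.
            "serviceAccount": service_account,
        },
    }

spec = build_agent_spec(
    "inventory-agent",
    "agent-minimal@my-project.iam.gserviceaccount.com",
)
```

The service account itself should then be granted only the narrow roles the agent needs (for example, read access to one specific bucket), so a future credential leak exposes far less.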




