Google’s Vertex AI Vulnerability Enables Low-Privileged Users to Gain Service Agent Roles

Google’s Vertex AI Vulnerability Enables Low-Privileged Users to Gain Service Agent Roles

Google’s Vertex AI contains default configurations that allow low-privileged users to escalate privileges by hijacking Service Agent roles.

XM Cyber researchers identified two attack vectors in the Vertex AI Agent Engine and Ray on Vertex AI, which Google deemed “working as intended.

Service Agents are managed identities that Google Cloud attaches to Vertex AI instances for internal operations. These accounts receive broad project permissions by default, creating risks when low-privileged users access them.

Attackers exploit this through confused deputy scenarios, where minimal access grants remote code execution (RCE) and allows credential theft from instance metadata.

Both paths start with read-only permissions but end with high-privilege actions, such as accessing Cloud Storage (GCS) or BigQuery. The diagram illustrates the Ray on Vertex AI flow, from persistent resources access to the Custom Code Service Agent compromise.

Google’s Vertex AI Vulnerability
Feature Vertex AI Agent Engine Ray on Vertex AI
Primary Target Reasoning Engine Service Agent Custom Code Service Agent
Vulnerability Type Malicious Tool Call (RCE) Insecure Default Access (Viewer to Root)
Initial Permission aiplatform.reasoningEngines.update aiplatform.persistentResources.get/list
Impact LLM memories, chats, GCS access Ray cluster root; BigQuery/GCS R/W

Developers deploy AI agents via frameworks like Google’s Agent Development Kit (ADK), which pickles Python code and stages it in GCS buckets. Attackers with aiplatform.reasoningEngines.update permission upload malicious code disguised as a tool, such as a reverse shell in a currency converter function.

google

Google’s Vertex AI Vulnerability
Vulnerability Chain

A query triggers the tool, executing the shell on the reasoning engine instance. Attackers then query metadata for the Reasoning Engine Service Agent token (service-@gcp-sa-aiplatform-re.iam.gserviceaccount.com), gaining permissions for memories, sessions, storage, and logging. This reads chats, LLM data, and buckets. Public buckets work as staging, needing no storage rights, XM Cyber said.

Ray clusters for scalable AI workloads attach the Custom Code Service Agent to the head node automatically. Users with aiplatform.persistentResources.list/get part of Vertex AI Viewer role access the GCP Console’s “Head node interactive shell” link.

Google’s Vertex AI Vulnerability
Vulnerability Chain

This grants root shell access despite viewer limits. Attackers extract the agent’s token via metadata, enabling GCS/BigQuery read-write, though IAM actions like signBlob are scoped-limited in tests. The second diagram shows the pivot to cloud storage and logging.

Revoke unnecessary Service Agent permissions using custom roles. Disable head node shells and validate tool code before updates. Monitor metadata accesses via Security Command Center’s Agent Engine Threat Detection, which flags RCE and token grabs.

Audit persistent resources and reasoning engines regularly. Enterprises adopting Vertex AI must treat these defaults as risks, not features.

Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.

googlenews



Source link