Google API Keys Leak Sensitive Data Without Warning via Gemini


Security researchers at Truffle Security discovered that legacy public-facing Google API keys can silently gain unauthorized access to Google’s sensitive Gemini AI endpoints.

This flaw exposes private files, cached data, and billable AI usage to attackers without any warning or notification to developers.

The vulnerability highlights the severe danger of retrofitting modern AI capabilities onto older cloud security architectures.

How Legacy Keys Leak Sensitive Data

For over a decade, Google explicitly instructed developers to embed API keys directly into client-side code for public services like Google Maps.

These keys were designed primarily as project identifiers for billing and were considered completely safe to share publicly.

Thousands of API keys that were deployed as benign billing tokens are now live Gemini credentials sitting on the public internet (Source: Truffle Security)

However, when a developer enables the Generative Language API on an existing project, any previously deployed public keys are silently upgraded into highly sensitive credentials.

This issue stems from insecure defaults and improper privilege assignment within the Google Cloud Platform.

Researchers at Truffle Security said that when a new API key is created, it defaults to an "unrestricted" state, making it automatically valid for every API enabled within that project.

Because Google relies on a single key format for both public identification and authentication, there is a dangerous lack of separation between publishable identifiers and secret credentials.
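Because the same opaque string is accepted by every enabled API, a key published for one purpose builds valid requests against a completely different one. A minimal sketch of the problem, using Google's documented endpoint shapes (the key below is a fabricated placeholder, not a real credential):

```python
# Sketch: one Google API key string is accepted by multiple, very different APIs.
from urllib.parse import urlencode

MAPS_GEOCODE = "https://maps.googleapis.com/maps/api/geocode/json"
GEMINI_MODELS = "https://generativelanguage.googleapis.com/v1beta/models"

def build_request_url(endpoint: str, api_key: str, **params: str) -> str:
    """Attach the key as a query parameter -- the only 'authentication' involved."""
    query = urlencode({**params, "key": api_key})
    return f"{endpoint}?{query}"

key = "AIzaPLACEHOLDER_NOT_A_REAL_KEY_000000000"  # same string for both services

maps_url = build_request_url(MAPS_GEOCODE, key, address="1600 Amphitheatre Pkwy")
gemini_url = build_request_url(GEMINI_MODELS, key)

# If the project behind `key` has the Generative Language API enabled,
# the second URL lists Gemini models -- no separate secret is ever required.
print(maps_url)
print(gemini_url)
```

Nothing about the key itself indicates which of the two requests it authorizes; that is decided project-side, invisibly to whoever embedded it.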

Exploiting this privilege escalation flaw is remarkably simple for malicious actors who scrape public code repositories.

Example Google API key in front-end source code, used for Google Maps but also able to access Gemini (Source: Truffle Security)

An attacker merely needs to copy an exposed key from a public website and send a direct request to the Gemini platform to gain unauthorized access.

From there, they can steal private datasets, view cached project contexts, and execute expensive AI queries that could rack up thousands of dollars in usage fees.
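The attack path described above amounts to a couple of plain HTTP GETs with the leaked key as a query parameter. A hedged sketch of how such a key could be probed (only test keys you own; URL shapes follow the public Generative Language API, where `cachedContents` is the cached-context resource):

```python
# Sketch: probing a scraped key against Gemini endpoints (only test keys you own).
import json
import urllib.error
import urllib.request

BASE = "https://generativelanguage.googleapis.com/v1beta"

def resource_url(path: str, api_key: str) -> str:
    """The leaked key is the only credential attached to the request."""
    return f"{BASE}/{path}?key={api_key}"

def key_is_gemini_enabled(api_key: str) -> bool:
    """A successful models listing means the key is a live Gemini credential."""
    try:
        with urllib.request.urlopen(resource_url("models", api_key), timeout=10) as resp:
            return "models" in json.load(resp)
    except urllib.error.HTTPError:
        return False  # 400/403: the key is not valid for Gemini

# Cached project contexts are readable the same way, e.g.:
#   urllib.request.urlopen(resource_url("cachedContents", scraped_key))
```

No OAuth flow, no signed token, no secret exchange: the scraped string alone decides whether the response is a 403 or a listing of the project's AI resources.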

Privilege Escalation Impact

The transition from legacy identifiers to sensitive credentials creates a massive attack surface spanning thousands of vulnerable websites.

Google's Own Keys (Source: Truffle Security)

The table below highlights the critical differences between how these keys were originally designed and how they function when Gemini is enabled.

| Feature | Legacy Public API Keys | Upgraded Gemini Keys |
| --- | --- | --- |
| Primary Purpose | Public project identification and billing | Sensitive authentication and data access |
| Deployment Location | Client-side HTML and JavaScript | Secure backend environments |
| Exposure Risk | Low risk; harmless if public | High risk; exposes private AI data |
| Default Configuration | Unrestricted access across all enabled APIs | Unrestricted; requires manual scoping |

Google is addressing this crisis by defaulting new keys in AI Studio to Gemini-only access and proactively blocking known leaked credentials.

Organizations must immediately check all projects for the Generative Language API and audit their dashboard for unrestricted keys.

Security teams should immediately rotate any public-facing keys with Gemini access and use code scanners to detect exposed credentials.
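Scanning for this class of leak can start with the well-known shape of Google API keys: `AIza` followed by 35 URL-safe characters. A minimal sketch of such a scanner (the sample snippet and key below are fabricated placeholders):

```python
# Sketch: scan a source tree for candidate Google API keys.
import re
from pathlib import Path

# Google API keys start with "AIza" followed by 35 URL-safe characters.
GOOGLE_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_keys(text: str) -> list[str]:
    """Return every candidate Google API key found in a blob of source code."""
    return GOOGLE_KEY_RE.findall(text)

def scan_tree(root: str, suffixes=(".js", ".html", ".ts", ".py")) -> dict[str, list[str]]:
    """Walk a source tree and report files containing candidate keys."""
    hits: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if path.suffix in suffixes and path.is_file():
            found = find_keys(path.read_text(errors="ignore"))
            if found:
                hits[str(path)] = found
    return hits

# Fabricated example (not a real key):
sample = '<script src="https://maps.googleapis.com/maps/api/js?key=AIzaSyA00000000000000000000000000000000"></script>'
print(find_keys(sample))
```

A pattern match only flags a candidate; each hit should then be checked against the project's credentials dashboard and rotated if it has Gemini access.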
