65% of Leading AI Companies Expose Verified Secrets, Including Keys and Tokens, on GitHub

A new security investigation reveals that 65% of prominent AI companies have leaked verified secrets on GitHub, exposing API keys, tokens, and sensitive credentials that could compromise their operations and intellectual property.

The Wiz research, which examined the 50 leading AI companies on the Forbes AI 50 list, uncovered widespread security lapses across the industry.

These leaked secrets were discovered in deleted forks, gists, and developer repositories, representing an attack surface that standard GitHub scanning tools routinely overlook.

What Makes This Different

Unlike commodity secret-scanning tools that rely on surface-level searches of GitHub organizations, the Wiz researchers employed a three-pronged methodology targeting depth, perimeter, and coverage.

Analysis of secrets leaked by AI companies

The “Depth” approach examined complete commit histories, deleted forks, workflow logs, and gists: the submerged portion of the security iceberg.
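The difference is easy to see in code. Commodity scanners typically inspect only the files at a repository's current HEAD, while walking the full patch history surfaces secrets that were committed and later "deleted". A minimal sketch of that idea, using only the documented `hf_` Hugging Face token prefix as an example pattern (the repo path and regex threshold are illustrative assumptions):

```python
import re
import subprocess

# Illustrative pattern: Hugging Face user access tokens begin with "hf_".
SECRET_RE = re.compile(r"hf_[A-Za-z0-9]{20,}")

def scan_history(repo_path: str) -> list[str]:
    """Scan every commit in a local clone, not just the current HEAD."""
    # `git log -p --all` emits the full patch text of every commit on
    # every ref, so secrets removed in a later commit still show up.
    patches = subprocess.run(
        ["git", "-C", repo_path, "log", "-p", "--all"],
        capture_output=True, text=True, check=True,
    ).stdout
    return scan_text(patches)

def scan_text(text: str) -> list[str]:
    """Return the unique secret-like strings found in a blob of text."""
    return sorted(set(SECRET_RE.findall(text)))
```

This only covers history inside one clone; deleted forks and gists, which the researchers also examined, require enumerating artifacts that GitHub retains outside the repository itself.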

The “Perimeter” dimension expanded discovery to include secrets accidentally committed by organization members to their personal repositories.


Meanwhile, “Coverage” addressed detection gaps for emerging AI-specific secret types across platforms such as Perplexity, Weights & Biases, Groq, and NVIDIA.

Among the most impactful leaks were LangSmith API keys granting organization-level access and enterprise-tier credentials from ElevenLabs, discovered in plaintext configuration files.

One anonymous AI 50 company’s exposure included a Hugging Face token that provided access to approximately 1,000 private models, alongside multiple Weights & Biases keys that exposed proprietary training data.

Troublingly, the 65% of companies with exposed secrets are collectively valued at over $400 billion. Yet smaller organizations proved equally vulnerable: even those with minimal public repositories demonstrated exposure risks.

Wiz experts emphasize the urgent need for action by AI companies. Implementing mandatory secret scanning for public version-control systems is essential and cannot be overlooked.
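One way to make scanning mandatory rather than advisory is to reject risky commits at the client, before they ever reach GitHub. The pre-commit hook below is an illustrative sketch, not Wiz's tooling, and the token patterns are assumed shapes:

```python
#!/usr/bin/env python3
"""Illustrative git pre-commit hook body: refuse a commit when staged
changes contain secret-shaped strings. To try it, save as
.git/hooks/pre-commit, make it executable, and call
block_commit_if_leaky() at the bottom of the file."""
import re
import subprocess
import sys

# Assumed token shapes; extend with your providers' real formats.
SECRET_RE = re.compile(r"\b(?:hf_|gsk_)[A-Za-z0-9]{20,}\b")

def staged_diff() -> str:
    # Only the changes about to be committed, not the whole working tree.
    return subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

def find_secrets(diff_text: str) -> list[str]:
    # Inspect only added lines ("+...") so pre-existing text is ignored.
    added = "\n".join(
        line[1:] for line in diff_text.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    )
    return sorted(set(SECRET_RE.findall(added)))

def block_commit_if_leaky() -> None:
    leaks = find_secrets(staged_diff())
    if leaks:
        print("Refusing to commit; secret-like strings found:", leaks)
        sys.exit(1)
```

A client-side hook is easily bypassed with `--no-verify`, so it complements, rather than replaces, server-side scanning of pushed history.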

Establishing proper disclosure channels from inception protects companies during vulnerability remediation. Additionally, AI service providers must develop custom detection for proprietary secret formats, as many leak their own platform credentials during deployment due to inadequate scanning.

The Wiz research underscores a critical message: organization members and contributors represent an extended attack surface that requires security policies from onboarding onward.

Treating employees’ personal repositories as part of corporate infrastructure becomes essential as AI adoption accelerates. In an industry racing ahead, the message is clear: speed cannot come at the expense of security.

Comprehensive secret detection must evolve alongside emerging AI technologies to raise organizational defense standards.
