PromptPwnd Vulnerability Exposes AI-Driven Build Systems to Data Theft

Researchers at the software security company Aikido Security have reported a new class of vulnerability that could compromise how major firms build their software. They have named the issue PromptPwnd, and it centres on prompt injection attacks against AI agents such as Gemini, Claude Code, and OpenAI Codex when those agents run inside automated systems like GitHub Actions and GitLab CI/CD.

Why AI Automation is Suddenly Risky

These automated CI/CD pipelines use AI to speed up tasks such as triaging bug reports. The flaw arises when an AI agent receives outside text (such as a bug report title), letting an attacker slip hidden instructions into the prompt. This technique, prompt injection, tricks the agent into mistaking the attacker's text for a direct command and running privileged tools.
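To make the pattern concrete, here is a minimal, hypothetical GitHub Actions workflow with the flaw. The `ai-agent` CLI is a stand-in for any of the agents mentioned above, not a real tool; the vulnerable part is expanding attacker-controlled issue text straight into the step via `${{ github.event.issue.title }}`.

```yaml
name: ai-issue-triage
on:
  issues:
    types: [opened]

jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      # VULNERABLE: attacker-controlled issue text is expanded directly
      # into the script, so hidden instructions inside the title or body
      # become part of the agent's prompt.
      - name: Ask the agent to triage the issue
        run: |
          ai-agent --prompt "Summarise and label this bug report: ${{ github.event.issue.title }} ${{ github.event.issue.body }}"
```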

This simple pattern of injecting untrusted text into the AI’s prompt lets attackers steal secrets or tamper with code and workflows. The finding shows that relying on these automated systems can backfire, especially since the same systems were recently targeted in attacks like Shai-Hulud 2.0.

Aikido Security was the first to identify this vulnerability pattern and immediately open-sourced Opengrep rules so that other security vendors and organisations can detect the flaw in their own code.
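Aikido’s published rules are the authoritative reference; the following is only a minimal sketch of what such a rule can look like in the Semgrep-compatible format Opengrep consumes, flagging untrusted event fields expanded inside workflow files.

```yaml
rules:
  - id: untrusted-event-text-in-workflow-step
    languages: [yaml]
    severity: ERROR
    message: >-
      Attacker-controlled event text (issue/PR titles, bodies, comments)
      is interpolated into a workflow step and may reach an AI agent's
      prompt. Pass it through an environment variable instead.
    # Flags ${{ github.event.issue.title }}-style expansions of fields
    # an outside user can control.
    pattern-regex: \$\{\{\s*github\.event\.(issue|pull_request|comment)\.(title|body)\s*\}\}
```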

The PromptPwnd Attack Chain (Source: Aikido Security)

Real-World Companies Were Exposed

Aikido Security confirmed the exposure in at least five Fortune 500 companies, and they believe many more are at risk. In the blog post shared with Hackread.com, researchers confirmed the attack chain is “practical, reproducible, and already present in real-world GitHub Actions workflows.”

In a notable case, Google’s own Gemini CLI repository was affected. Google moved quickly, patching the issue within four days of Aikido Security responsibly sharing its findings. Notably, this is one of the first confirmed demonstrations that AI prompt injection can directly compromise critical software pipelines.

The same risk was found in other popular AI tools like Claude Code Actions and OpenAI Codex Actions. While these tools ship with built-in guardrails (such as requiring explicit user approval), researchers found that if companies disable those guardrails with a simple configuration change, an outside attacker can readily steal the highly privileged GITHUB_TOKEN.
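The risky configuration looks roughly like the hypothetical step below. The action name and the `require_user_approval` input are invented for illustration and do not belong to any specific vendor’s action, but `secrets.GITHUB_TOKEN` is the real token at stake.

```yaml
# Hypothetical step showing the dangerous combination: untrusted text
# in the prompt, the safety gate switched off, and a privileged token
# handed to the agent. Input names are illustrative, not a real action's.
- name: Run AI agent on new issue
  uses: example/ai-agent-action@v1
  with:
    prompt: "Investigate and fix: ${{ github.event.issue.title }}"
    require_user_approval: false                 # guardrail disabled
    github_token: ${{ secrets.GITHUB_TOKEN }}    # exfiltration target
```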

Given this widespread risk, security experts advise anyone running these automated AI tools to immediately limit the privileged tools their AI agents can access and to never inject untrusted user input directly into AI prompts.
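A minimal sketch of those two mitigations in a GitHub Actions workflow follows; `ai-agent` and its flags remain placeholders, but the `permissions` block and the environment-variable pattern are standard workflow features.

```yaml
permissions:
  contents: read        # least-privilege GITHUB_TOKEN for the whole workflow

jobs:
  triage:
    runs-on: ubuntu-latest
    steps:
      - name: Ask the agent to triage the issue
        env:
          # The runner passes the title as plain data; it is never spliced
          # into the script or the prompt template itself.
          ISSUE_TITLE: ${{ github.event.issue.title }}
        run: |
          ai-agent --prompt "Summarise the bug report given as untrusted data." --untrusted-data "$ISSUE_TITLE"
```

Separating the instruction template from the untrusted data does not make the model immune to injection, which is why the read-only token matters: even a confused agent cannot push code or leak write-scoped credentials.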




