A critical cross-vendor vulnerability class dubbed “Comment and Control” is a new category of prompt injection attacks that weaponizes GitHub pull request titles, issue bodies, and issue comments to hijack AI coding agents and steal API keys and access tokens directly from CI/CD environments.
The attack name is a deliberate play on the classic Command and Control (C2) framework used in malware campaigns. Three widely deployed AI agents, Anthropic’s Claude Code Security Review, Google’s Gemini CLI Action, and GitHub Copilot Agent (SWE Agent), were confirmed vulnerable.
According to researcher Aonan Guan, the entire attack loop runs within GitHub itself: an attacker writes a malicious PR title or issue comment, the AI agent reads and processes it as trusted context, executes attacker-supplied instructions, and exfiltrates credentials back through a PR comment, issue comment, or git commit, no external server required.

Unlike classic indirect prompt injection, which is reactive and requires a victim to explicitly ask the AI to process a document, Comment and Control is proactive: GitHub Actions workflows auto-trigger on pull_request, issues, and issue_comment events, meaning simply opening a PR or filing an issue can activate the agent without any victim interaction.
Finding 1: Claude Code Security Review — PR Title to RCE
In Anthropic’s Claude Code Security Review action, the PR title is directly interpolated into the agent’s prompt with zero sanitization. Because the Claude CLI is invoked without --disallowed-tools or --allowed-tools restrictions, the subprocess inherits all environment variables including ANTHROPIC_API_KEY and GITHUB_TOKEN.
An attacker simply opens a PR with a malicious title that breaks out of the prompt context and instructs Claude to execute whoami, ps auxeww, or env — returning the full credential dump as a JSON “security finding” posted to the PR comment. Rated CVSS 9.4 Critical by Anthropic (upgraded from 9.3), the bug was reported via HackerOne (#3387969) and awarded a $100 bounty.
Google’s Gemini CLI Action includes issue titles, bodies, and comments in the agent’s prompt context under an “Additional Content” section. By injecting a fake “Trusted Content Section” immediately after it, an attacker overrides Gemini’s safety instructions, causing Gemini to post the GEMINI_API_KEY as a public issue comment, visible to anyone.
Reported to Google VRP (#1609699) by researcher Neil Fendley alongside Johns Hopkins University collaborators, the vulnerability earned a $1,337 bounty.
Finding 3: Copilot Agent — Three-Layer Defense Bypass
The GitHub Copilot Agent finding is the most technically sophisticated. GitHub had implemented three runtime-level mitigations environment variable filtering, secret scanning, and network firewall — all of which were bypassed:
- Env Filtering Bypass: The
UU()function filters sensitive variables only from the child bash subprocess. Runningps auxewwfrom that subprocess reads/proc/[pid]/environof the unfiltered parent Node.js process and MCP server, recoveringGITHUB_TOKEN,GITHUB_COPILOT_API_TOKEN, and two additional credentials. - Secret Scanning Bypass: GitHub’s scanner detects
ghs_/ghu_token prefixes. Base64-encoding the output before committing evades all pattern matching. - Firewall Bypass: The encoded credential file is exfiltrated via a standard
git pushto a PR — a whitelisted operation — making it indistinguishable from normal Copilot workflow activity.
Adding another layer of stealth, the attack payload is hidden inside an HTML comment in the issue body, invisible in GitHub’s rendered Markdown view but fully parsed by the AI agent.
A victim sees only an innocent visible request and unknowingly assigns the issue to Copilot. The bug was reported via HackerOne (#3544297), initially dismissed as a “known issue” but reopened after the researcher submitted reverse-engineered source code proof from Copilot’s minified index.js. GitHub ultimately awarded a $500 bounty.

| Component | Injection Surface | Exfiltration Channel | Credentials Leaked | Bounty |
|---|---|---|---|---|
| Claude Code | PR title | PR comment | ANTHROPIC_API_KEY, GITHUB_TOKEN | $100 |
| Gemini CLI | Issue comments | Issue comment | GEMINI_API_KEY | $1,337 |
| Copilot Agent | Issue body (HTML comment) | Git commit | GITHUB_TOKEN, COPILOT_API_TOKEN, +2 more | $500 |
All three vulnerabilities share the same architectural flaw: untrusted GitHub data flows into an AI agent that holds production secrets and unrestricted tool access in the same runtime.
As researchers noted, this is the first public cross-vendor demonstration of a single prompt injection pattern defeating multiple major AI agents including one that had three dedicated runtime defenses in place.
Security experts warn the pattern extends well beyond GitHub Actions to any AI agent processing untrusted input with access to tools and secrets, including Slack bots, Jira agents, email agents, and deployment automation pipelines.
Mitigations
- Allowlist tools, never blocklist — use
--allowed-toolsto grant only the minimum required capabilities; blocklisting (e.g., blockingps) is trivially bypassed with alternatives likecat /proc/*/environ. - Least-privilege secrets — agents performing read-only tasks, like issue triage, should not hold
GITHUB_TOKENwith write scope. - Require human approval gates before agents perform outbound actions or access credentials.
- Audit all AI agent integrations in CI/CD pipelines and monitor Actions logs for anomalous credential-access patterns.
Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.

