Malicious Support Tickets Let Hackers Exploit Atlassian’s Model Context Protocol

Malicious Support Tickets Let Hackers Exploit Atlassian’s Model Context Protocol

A new class of cyberattack is targeting organizations leveraging Atlassian’s Model Context Protocol (MCP), exposing a critical weakness in the boundary between external and internal users.

Researchers have demonstrated that malicious support tickets can be weaponized to exploit AI-powered workflows in Atlassian’s Jira Service Management (JSM), enabling attackers to gain privileged access and exfiltrate sensitive data—all without ever directly breaching internal systems.

How the Attack Works

Traditionally, organizations separate external users—who submit tickets or requests—from internal users, who resolve them with elevated permissions.

– Advertisement –

However, Atlassian’s MCP, a protocol designed to embed AI into enterprise workflows, blurs this line. When an internal user (such as a support engineer) invokes an AI action—like ticket summarization—through MCP, the action runs with their internal privileges.

If the ticket contains a malicious payload, the AI unwittingly executes harmful instructions, acting as a proxy for the attacker.

prompt Injection
prompt Injection

The attack chain unfolds as follows:

  • A threat actor submits a specially crafted support ticket via JSM.
  • An internal user triggers an MCP-connected AI action (e.g., using Claude Sonnet) to process the ticket.
  • The AI executes the prompt injection payload with internal permissions.
  • Sensitive data is exfiltrated or altered, often by writing it back into the support ticket, where the attacker can retrieve it.
  • With no prompt isolation or input validation, the attacker leverages the internal user’s privileges without direct access to the MCP or backend systems.

Proof-of-Concept: The “Living off AI” Attack

In a recent proof-of-concept (PoC) attack, researchers showed how an external actor could use a malicious ticket to trigger an MCP action, causing the AI to leak internal tenant data or perform unauthorized actions.

Notably, the attacker never interacts with the MCP directly—the internal support engineer unknowingly executes the malicious instructions. 

This technique, dubbed “Living off AI,” highlights the risk in any environment where AI processes untrusted input without prompt isolation or context control.

The risk extends beyond direct support tickets. In another scenario, if a partner’s account is compromised, an attacker could submit enhancement requests containing MCP prompts that silently add comments or malicious links to multiple Jira issues.

Internal users clicking these links could trigger malware downloads, credential theft, and lateral movement within the organization—all orchestrated by the AI,C not the attacker directly.

Experts stress that this issue is not limited to Atlassian; it’s a systemic pattern wherever AI tools interact with untrusted external input. To mitigate such risks, organizations are urged to:

  • Enforce least privilege on AI-driven actions.
  • Detect suspicious prompt usage in real time.
  • Maintain audit logs of MCP activity.
  • Implement sandboxing and input validation for AI actions.

As AI becomes deeply embedded in business workflows, unchecked integration with external-facing systems introduces critical vulnerabilities.

The “Living off AI” attack demonstrates the urgent need for robust security controls, prompt isolation, and vigilant governance of AI-powered enterprise tools.

Find this News Interesting! Follow us on Google News, LinkedIn, and X to Get Instant Updates


Source link