A new era of AI vulnerability has arrived, and it is far more dangerous than simply tricking a chatbot into saying something rude.
New research released this week demonstrates how attackers can weaponize everyday tools such as Google Calendar and Zoom to spy on users without ever prompting them to click a link.
In a groundbreaking paper titled The Promptware Kill Chain, researchers from Ben-Gurion University, Tel Aviv University, and Harvard University (including noted cryptographer Bruce Schneier) argue that the industry must stop viewing “prompt injection” as a minor nuisance. Instead, they have introduced a new classification: Promptware.
From Chatbot Tricks to Spyware
Historically, prompt injection was compared to SQL injection, a method used to compromise databases.
However, the researchers argue this analogy is dangerously outdated.
In the “Promptware” model, a malicious prompt acts exactly like malware. The paper highlights a terrifying exploit involving Google Calendar and Zoom. Here is how the attack works:
- The Bait: A hacker sends a Google Calendar invite containing a hidden, malicious prompt in the description.
- The Trigger: The victim’s AI assistant (which has permission to manage their calendar and emails) reads the invite.
- The Execution: The prompt tricks the AI into believing it has been instructed to start a Zoom meeting and stream the video feed to an external server controlled by the hacker.
Because the AI has legitimate access to these tools, it executes the command willingly, effectively turning the AI assistant into an insider threat.
The 7-Stage Kill Chain
The research team analyzed 36 prominent incidents to develop the Promptware Kill Chain.
This framework proves that AI attacks now follow the same complex lifecycle as traditional cyberwarfare:
- Initial Access: Getting the malicious prompt into the system (e.g., via a calendar invite).
- Privilege Escalation: “Jailbreaking” the AI to bypass safety filters.
- Reconnaissance: The AI looks around the user’s emails or files to gather context.
- Persistence: The prompt embeds itself in the AI’s memory so it can re-execute later.
- Command & Control: Establishing a link to the hacker.
- Lateral Movement: Spreading the malicious prompt to other users (e.g., the AI emailing the infected calendar invite to the victim’s contacts).
- Actions on Objective: The final damage, such as data theft or financial fraud.
The shift from “injection” to “Promptware” is critical. Traditional defenses focus on filtering bad words, but Promptware behaves like a virus. It can execute code, steal crypto, and spread across networks.
As LLMs are integrated deeper into our operating systems and given control over tools like cameras and microphones, the blast radius of an attack increases.
The authors conclude that we need “defense-in-depth”, security layers at every stage of the kill chain, rather than just hoping the AI refuses to read a bad prompt.
Follow us on Google News, LinkedIn, and X to Get Instant Updates and Set GBH as a Preferred Source in Google





