Microsoft has disclosed a new side-channel attack that could let eavesdroppers infer chat topics even through end-to-end encryption.
Although there is no indication that the attack has been exploited in the wild, major AI chatbot providers have rolled out defences to protect users’ privacy.
OpenAI, Microsoft, Mistral and xAI have all deployed mitigations against the “Whisper Leak” attack, which exploits the pattern of encrypted packet sizes and timing during streaming responses.
Whisper Leak is based on a fundamental characteristic of streaming language models rather than an implementation flaw.
The vulnerability exploits how large language models generate responses token by token, creating distinctive digital fingerprints that machine learning classifiers can identify with high accuracy.
Because the symmetric ciphers used by TLS produce ciphertext whose size closely tracks the plaintext, the size and timing of response packets reveal information about the underlying content.
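A minimal sketch of the leak (illustrative, not Microsoft's code): with an AEAD cipher such as AES-GCM, each record's ciphertext length is the plaintext length plus a fixed overhead, so when a model streams one token per record, the sequence of observed packet sizes mirrors the token lengths. The 16-byte overhead below is an assumption standing in for real TLS record framing.

```python
AEAD_OVERHEAD = 16  # assumed fixed per-record overhead (e.g. an AES-GCM tag)

def observed_sizes(tokens):
    """Packet sizes an eavesdropper would see for token-per-record streaming."""
    return [len(t.encode("utf-8")) + AEAD_OVERHEAD for t in tokens]

# Two responses with different wording produce different size fingerprints,
# even though the bytes themselves are encrypted.
a = observed_sizes(["Money", " laundering", " is", " a", " crime"])
b = observed_sizes(["The", " weather", " today", " is", " sunny"])
print(a)  # [21, 27, 19, 18, 22]
print(b)  # [19, 24, 22, 19, 22]
```

The encrypted bytes are opaque, but the two fingerprints differ, and it is exactly this difference that a classifier can learn.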
Microsoft researchers showed how attackers in a position to observe network traffic could identify specific conversation topics even when communications were encrypted using Transport Layer Security (TLS).
“This especially poses real-world risks to users by oppressive governments where they may be targeting topics such as protesting, banned material, election process, or journalism,” Microsoft’s Defender Security Research Team said.
Microsoft’s proof-of-concept focused on identifying conversations about money laundering.
The researchers trained binary classifiers using 100 variants of questions on the target topic and nearly 12,000 unrelated questions from a public dataset.
Results showed attack accuracy exceeding 98 percent across multiple tested models in controlled experiments.
In a simulated surveillance scenario monitoring 10,000 random conversations with just one sensitive topic, attackers achieved 100 percent precision while catching between five and 50 percent of target conversations.
This means every conversation flagged as suspicious would genuinely be about the sensitive topic, with no false alarms.
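To make the classification step concrete, here is a deliberately simplified stand-in for the researchers' models (which were far more sophisticated): a nearest-centroid classifier over summary features of each conversation's packet-size trace. The feature choices and class labels are assumptions for illustration only.

```python
import statistics

def features(sizes):
    # Feature vector: mean, population stdev, and total of observed packet sizes.
    return (statistics.mean(sizes), statistics.pstdev(sizes), sum(sizes))

def train(traces, labels):
    # Compute one centroid per class in feature space.
    centroids = {}
    for label in set(labels):
        vecs = [features(t) for t, l in zip(traces, labels) if l == label]
        centroids[label] = tuple(statistics.mean(dim) for dim in zip(*vecs))
    return centroids

def classify(centroids, sizes):
    # Assign the trace to the nearest centroid by squared Euclidean distance.
    v = features(sizes)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2 for a, b in zip(v, centroids[lab])))

# Hypothetical traces: the "target" topic happens to stream longer tokens.
target_traces = [[40, 45, 42, 50], [41, 44, 48, 47]]
other_traces = [[18, 20, 19, 21], [17, 22, 20, 19]]
model = train(target_traces + other_traces, ["target"] * 2 + ["other"] * 2)
print(classify(model, [43, 46, 44, 49]))  # "target"
```

The real attack trained on roughly 12,000 labeled traces and used stronger learners, but the pipeline shape — extract features from size/timing traces, fit a binary classifier, score unseen traffic — is the same.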
As attackers collect more training data, the threat could increase over time.
Multiple conversations from the same user or multi-turn dialogues would provide richer patterns for analysis.
To deploy Whisper Leak attacks, adversaries must be positioned to observe network traffic, such as nation-state actors at the internet service provider (ISP) layer or someone on a shared wi-fi network.
Following Microsoft’s disclosure, OpenAI has now implemented an obfuscation field in streaming responses that adds random text of variable length to each token, masking the distinctive patterns.
Microsoft Azure mirrored this approach, and the company said the added noise reduces the attack's effectiveness to the point that Whisper Leak no longer represents a practical risk.
Mistral.ai added a similar parameter called “p” to achieve the same effect.
The mitigations work by breaking the relationship between response content and packet patterns that made the attack possible.
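A sketch of that padding idea, under stated assumptions: the function below appends random-length filler to each streamed chunk, so record sizes no longer track token lengths. The chunk shape and `max_pad` bound are illustrative; the providers' actual wire formats are implementation details not described in detail here.

```python
import secrets
import string

def pad_chunk(token, max_pad=32):
    """Wrap a streamed token with random-length filler (illustrative only).

    The client discards the "obfuscation" field; an eavesdropper sees only
    the inflated, randomized record size.
    """
    pad_len = secrets.randbelow(max_pad + 1)
    filler = "".join(secrets.choice(string.ascii_letters) for _ in range(pad_len))
    return {"token": token, "obfuscation": filler}

chunk = pad_chunk("hello")
print(len(chunk["token"]), len(chunk["obfuscation"]))
```

Because the filler length is drawn fresh for every chunk, identical responses produce different size traces on each run, which is what defeats a classifier trained on size fingerprints.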
Beyond these provider-side mitigations, users in high-risk situations should avoid discussing sensitive topics with AI chatbots over untrusted networks.
Virtual private networks (VPNs) also provide an additional protection layer by obscuring traffic from local network observers.
Microsoft has published the attack models and data collection code in a public repository for independent verification.
