Microsoft finds security flaw in AI chatbots that could expose conversation topics
Your conversations with AI assistants such as ChatGPT and Google Gemini may not be as private as you think they are. Microsoft has revealed a serious flaw in the large language models (LLMs) that power these AI services, potentially exposing the topic of your conversations with them. Researchers dubbed the vulnerability “Whisper Leak” and found it affects nearly all the models they tested.
When you chat with AI assistants built into major search engines or apps, the information is protected by TLS (Transport Layer Security), the same encryption used for online banking. These secure connections stop would-be eavesdroppers from reading the words you type. However, Microsoft discovered that the metadata (information about how your messages travel across the internet, such as the size and timing of the encrypted packets) remains visible. Whisper Leak doesn't break encryption; it takes advantage of what encryption cannot hide.
Testing LLMs
In research published on the arXiv preprint server, Microsoft researchers explain how they tested 28 LLMs for this vulnerability. First, they created two sets of questions: one collected many different ways of asking about a single sensitive topic, such as money laundering, while the other contained thousands of random, everyday queries. Then they recorded each conversation's network "data rhythm," meaning the packet sizes (the lengths of the encrypted chunks of data being sent) and the timing (the delay between one packet and the next).
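The kind of traffic recording described here can be sketched in a few lines. The paper's actual capture tooling is not described in this article; the following sketch (using the scapy library, which requires packet-capture privileges) simply shows that an observer can log packet sizes and inter-arrival times without decrypting anything.

```python
# Illustrative only: a rough sketch of how an on-path observer could record the
# "data rhythm" of an encrypted chat session. The TLS payload stays unreadable;
# only packet sizes and timings are collected.
from scapy.all import sniff  # requires scapy and capture privileges

sizes, gaps = [], []
last_time = None

def record(pkt):
    """Store the size of each packet and the delay since the previous one."""
    global last_time
    sizes.append(len(pkt))                        # packet size in bytes
    if last_time is not None:
        gaps.append(float(pkt.time) - last_time)  # inter-arrival time in seconds
    last_time = float(pkt.time)

# Observe 200 packets of HTTPS traffic (TLS-encrypted, content unreadable).
sniff(filter="tcp port 443", prn=record, count=200)

print(f"captured {len(sizes)} packet sizes and {len(gaps)} inter-arrival times")
```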
Next, they trained an AI program to distinguish the sensitive target topic from the everyday queries based solely on that data rhythm. If the program could identify the sensitive topic without reading the encrypted text, it would confirm a privacy problem.
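A minimal stand-in for such a classifier is sketched below, assuming each conversation has already been summarized as a fixed-length vector of packet sizes and timing gaps. The researchers' actual models and feature engineering are not detailed in this article, and the random placeholder data here merely stands in for real captures.

```python
# Minimal sketch: train a classifier to separate sensitive-topic traffic from
# everyday-query traffic using only metadata-derived features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder data: 2,000 conversations, each summarized as 40 numbers
# (e.g. the first 20 packet sizes and 20 inter-arrival gaps).
X = rng.normal(size=(2000, 40))
y = rng.integers(0, 2, size=2000)   # 1 = sensitive-topic query, 0 = everyday query

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = GradientBoostingClassifier().fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]

# On real traffic features, a score well above 0.5 would indicate that the
# metadata alone leaks the conversation topic.
print("ROC AUC:", roc_auc_score(y_test, scores))
```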
For most models, the classifier correctly identified the topic of conversation with over 98% accuracy. The attack could also flag sensitive conversations with 100% precision, even when they occurred in only 1 out of every 10,000 conversations. The researchers tested three different ways of defending against the attack, but none stopped it completely.
Stopping the leak
According to the team, the problem isn't with the encryption itself but with how responses are transmitted. "This is not a cryptographic vulnerability in TLS itself, but rather exploitation of metadata that TLS inherently reveals about encrypted traffic structure and timing," they write.
Given the severity of the leak and the ease with which the attack can be executed, the researchers state clearly in their paper that the industry must secure future systems. “Our findings underscore the need for LLM providers to address metadata leakage as AI systems handle increasingly sensitive information.”
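As an illustration only (the article does not say which defenses the researchers evaluated), one commonly discussed countermeasure is to pad each streamed response chunk to a fixed bucket size, so that packet lengths carry less information about the underlying tokens. This is not the researchers' method, just a generic sketch of the idea.

```python
# Hypothetical countermeasure sketch: pad streamed response chunks to uniform
# bucket sizes so observed packet lengths reveal less about the content.
def pad_chunk(chunk: bytes, bucket: int = 512) -> bytes:
    """Pad a response chunk with null bytes up to the next multiple of `bucket`."""
    remainder = len(chunk) % bucket
    if remainder:
        chunk += b"\x00" * (bucket - remainder)
    return chunk

# Every chunk now appears as a multiple of 512 bytes, regardless of the text.
print(len(pad_chunk(b"partial model output...")))  # 512
```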
Written by Paul Arnold, edited by Gaby Clark, and fact-checked and reviewed by Robert Egan.
More information:
Geoff McDonald et al., Whisper Leak: A side-channel attack on Large Language Models, arXiv (2025). DOI: 10.48550/arXiv.2511.03675
Microsoft blog: www.microsoft.com/en-us/securi … ote-language-models/
© 2025 Science X Network