Auditing Prompt Caching in Language Model APIs

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We develop and conduct statistical audits to detect prompt caching and cache sharing in real-world language model APIs, as prompt caching may cause privacy leakage.
Abstract: Prompt caching in large language models (LLMs) results in data-dependent timing variations: cached prompts are processed faster than non-cached prompts. These timing differences introduce the risk of side-channel timing attacks. For example, if the cache is shared across users, an attacker could identify cached prompts from fast API response times to learn information about other users' prompts. Because prompt caching may cause privacy leakage, transparency around the caching policies of API providers is important. To this end, we develop and conduct statistical audits to detect prompt caching in real-world LLM API providers. We detect global cache sharing across users in seven API providers, including OpenAI, resulting in potential privacy leakage about users' prompts. Timing variations due to prompt caching can also result in leakage of information about model architecture. Namely, we find evidence that OpenAI's embedding model is a decoder-only Transformer, which was previously not publicly known.
Lay Summary: When you send a prompt to a large language model (LLM) like ChatGPT, it may temporarily store your prompt so it can respond faster the next time a matching prompt is sent. We discovered that this method, called “prompt caching”, can accidentally reveal your private messages to other users. If a user sends a prompt to the LLM and receives an unusually fast response, this indicates that the prompt is currently saved in the server’s memory, so the user can conclude that someone else recently sent that prompt. Since this is a violation of user privacy, we wanted to audit real-world LLMs to determine whether they perform prompt caching. We developed and conducted our audits using careful statistical testing. We detected privacy risks from prompt caching in seven LLM companies, including OpenAI, the creator of ChatGPT. Before publicly releasing our findings, we contacted these companies and worked with them to fix these vulnerabilities. Our research helps improve privacy, security, and transparency in LLMs, making them safer and more trustworthy for the millions of people who use these AI tools.
Link To Code: https://github.com/chenchenygu/auditing-prompt-caching
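Audit Sketch (illustrative): The audits described above are statistical hypothesis tests on API response times. The snippet below is a minimal sketch of that general idea, not the released code (see the repository linked above for the actual implementation); the endpoint, model name, prompt construction, sample size, and the choice of a one-sided Mann-Whitney U test are all illustrative assumptions.

```python
# Minimal sketch of a timing-based prompt-caching audit (not the paper's exact protocol).
# Assumptions: an OpenAI-compatible chat endpoint, placeholder prompts, and a one-sided
# Mann-Whitney U test as the statistical test; all of these are illustrative choices.
import time
import random

from openai import OpenAI
from scipy.stats import mannwhitneyu

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def timed_request(prompt: str, model: str = "gpt-4o-mini") -> float:
    """Send a prompt and return the wall-clock response time in seconds."""
    start = time.perf_counter()
    client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1,  # keep generation short so timing mostly reflects prompt processing
    )
    return time.perf_counter() - start


def audit_prompt_caching(base_prompt: str, n_samples: int = 25, alpha: float = 0.05) -> bool:
    """Compare response times for a repeated prompt (cache-hit candidate)
    against fresh, never-before-seen prompts (cache-miss candidates)."""
    timed_request(base_prompt)  # prime any cache with the base prompt

    hit_times, miss_times = [], []
    for _ in range(n_samples):
        hit_times.append(timed_request(base_prompt))  # identical prompt, possibly cached
        fresh_prompt = f"{random.random()} {base_prompt}"  # unique prefix, not cached
        miss_times.append(timed_request(fresh_prompt))

    # One-sided test: are "hit" times stochastically smaller than "miss" times?
    _, p_value = mannwhitneyu(hit_times, miss_times, alternative="less")
    print(f"p-value = {p_value:.4f}")
    return p_value < alpha  # True suggests caching-induced timing differences


if __name__ == "__main__":
    long_prefix = "word " * 500  # a long shared prefix makes caching effects easier to detect
    if audit_prompt_caching(long_prefix + "Summarize this."):
        print("Timing differences consistent with prompt caching detected.")
    else:
        print("No significant timing difference detected.")
```

In this sketch, significantly faster responses for the repeated prompt than for fresh prompts are taken as evidence of caching-induced timing variation; detecting cache sharing across users would additionally require issuing the priming and probing requests from different accounts.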
Primary Area: Social Aspects->Accountability, Transparency, and Interpretability
Keywords: audit, prompt caching, privacy, transparency, large language models
Flagged For Ethics Review: true
Submission Number: 13956