The Canary’s Echo: Auditing Privacy Risks of LLM-Generated Synthetic Text

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · License: CC BY-SA 4.0
TL;DR: We design a MIA and specialized canaries to audit the privacy of synthetic text generated by LLMs.
Abstract: How much information about training samples can be leaked through synthetic data generated by Large Language Models (LLMs)? Overlooking the subtleties of information flow in synthetic data generation pipelines can lead to a false sense of privacy. In this paper, we assume an adversary has access to some synthetic data generated by an LLM. We design membership inference attacks (MIAs) that target the training data used to fine-tune the LLM that is then used to synthesize data. The strong performance of our MIAs shows that synthetic data leaks information about the training data. Further, we find that canaries crafted for model-based MIAs are sub-optimal for privacy auditing when only synthetic data is released. Such out-of-distribution canaries have limited influence on the model’s output when prompted to generate useful, in-distribution synthetic data, which drastically reduces their effectiveness. To tackle this problem, we leverage the mechanics of auto-regressive models to design canaries with an in-distribution prefix and a high-perplexity suffix that leave detectable traces in synthetic data. This enhances the power of data-based MIAs and provides a better assessment of the privacy risks of releasing synthetic data generated by LLMs.
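To illustrate the kind of data-based membership signal the abstract refers to, here is a minimal Python sketch that scores a canary by its likelihood under a smoothed bigram model fitted on the released synthetic corpus. The function names, whitespace tokenization, and add-one smoothing are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative data-based MIA signal (assumed setup, not the paper's exact method):
# fit a simple n-gram model on the synthetic corpus and score a canary under it.
from collections import Counter
from math import log


def fit_ngram(corpus, n=2):
    """Count n-grams and their (n-1)-gram contexts over whitespace-tokenized documents."""
    ngrams, contexts, vocab = Counter(), Counter(), set()
    for doc in corpus:
        tokens = doc.split()
        vocab.update(tokens)
        for i in range(len(tokens) - n + 1):
            gram = tuple(tokens[i:i + n])
            ngrams[gram] += 1
            contexts[gram[:-1]] += 1
    return ngrams, contexts, len(vocab)


def membership_signal(canary, model, n=2):
    """Average log-probability of the canary under the n-gram model (add-one smoothing).

    A higher score suggests the canary left traces in the synthetic data,
    i.e. it was more likely a member of the fine-tuning set.
    """
    ngrams, contexts, vocab_size = model
    tokens = canary.split()
    logps = []
    for i in range(len(tokens) - n + 1):
        gram = tuple(tokens[i:i + n])
        logps.append(log((ngrams[gram] + 1) / (contexts[gram[:-1]] + vocab_size)))
    return sum(logps) / max(len(logps), 1)


# Usage: compare the canary's score against scores of reference (non-member) canaries
# to decide membership. The corpus below is a placeholder.
synthetic_corpus = [
    "the patient was discharged after a routine check up",
    "the patient reported mild symptoms after treatment",
]
model = fit_ngram(synthetic_corpus, n=2)
print(membership_signal("the patient was discharged after treatment", model, n=2))
```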
Lay Summary: Large Language Models (LLMs) can generate synthetic data that closely mimics human-written content, making them an attractive alternative to sharing real data. However, the fact that data is synthetic does not guarantee that it preserves privacy. To evaluate the true privacy risks, we use privacy auditing: specifically, we design attacks that test whether specific data points influenced the model’s outputs. We show that even when only synthetic text is released, attackers can still infer whether particular records were part of the model’s training set. Existing auditing tools, such as inserting unique “canary” phrases into the training data, prove ineffective in this context, as such atypical phrases are unlikely to appear in useful synthetic outputs. To address this, we design a new class of canaries tailored to the behavior of LLMs: they begin with typical, in-distribution prefixes and end with rare, high-perplexity suffixes that leave detectable traces in the generated data. This makes them more effective for auditing privacy risks in synthetic data. Together, we hope that the attack and canary design we propose enable a thorough assessment of the privacy risks of releasing synthetic data generated by LLMs.
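To make the canary design concrete, here is a minimal sketch that builds a canary from an in-distribution prefix and a high-temperature (hence high-perplexity) sampled suffix using Hugging Face Transformers. The model choice ("gpt2"), temperature, and suffix length are assumptions for illustration, not the paper's configuration.

```python
# Hypothetical canary construction: in-distribution prefix + high-perplexity suffix.
# Model name, temperature, and suffix length are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer


def make_canary(prefix: str, model_name: str = "gpt2",
                temperature: float = 5.0, suffix_tokens: int = 20) -> str:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    inputs = tokenizer(prefix, return_tensors="pt")
    # A high sampling temperature flattens the next-token distribution, so the
    # generated suffix is surprising (high-perplexity) and easy to trace later.
    output_ids = model.generate(
        **inputs,
        do_sample=True,
        temperature=temperature,
        top_k=0,  # sample from the full vocabulary
        max_new_tokens=suffix_tokens,
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


# Example: a prefix that looks like the fine-tuning domain, followed by a
# high-temperature suffix that serves as the detectable canary tail.
canary = make_canary("The customer review praised the hotel's breakfast, noting that")
print(canary)
```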
Link To Code: https://aka.ms/canarysecho
Primary Area: Deep Learning->Large Language Models
Keywords: privacy, language models, synthetic data
Submission Number: 10274