User Inference Attacks on Large Language Models

Published: 28 Oct 2023, Last Modified: 11 Dec 2023
Venue: FL@FM-NeurIPS'23 Poster
Student Author Indication: Yes
Keywords: privacy, user data, user inference, LLM privacy
TL;DR: We study a privacy attack wherein an attacker infers whether a user's data was used for language model fine-tuning given a small set of samples from that user.
Abstract: We study the privacy implications of fine-tuning large language models (LLMs) on user-stratified (i.e., federated) data. We define a realistic threat model, called user inference, wherein an attacker infers whether or not a user's data was used for fine-tuning. We implement attacks for this threat model that require only a small set of samples from a user (possibly different from the samples used for training) and black-box access to the fine-tuned LLM. We find that LLMs are susceptible to user inference attacks across a variety of fine-tuning datasets, with outlier users (i.e., those whose data distributions differ substantially from other users') and users who contribute large quantities of data being the most susceptible. Finally, we find that mitigations in the training algorithm, such as batch or per-example gradient clipping and early stopping, fail to prevent user inference, while limiting the number of fine-tuning samples contributed by a single user can reduce attack effectiveness (albeit at the cost of reducing the total amount of fine-tuning data).
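Illustrative sketch: the abstract does not spell out the attack statistic, but a common way to instantiate such a black-box user inference attack is to aggregate a log-likelihood-ratio score between the fine-tuned model and a reference (pre-fine-tuning) model over the attacker's samples from the target user, then threshold it. The sketch below assumes likelihood-query access to both models and uses hypothetical checkpoint paths; it is not the paper's exact implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint identifiers; the abstract does not specify the
# actual fine-tuned or reference models.
FINETUNED = "path/to/fine-tuned-model"
REFERENCE = "path/to/pretrained-reference-model"

tokenizer = AutoTokenizer.from_pretrained(REFERENCE)
finetuned = AutoModelForCausalLM.from_pretrained(FINETUNED).eval()
reference = AutoModelForCausalLM.from_pretrained(REFERENCE).eval()


@torch.no_grad()
def log_likelihood(model, text: str) -> float:
    """Total log-probability the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    # HF returns the mean token-level negative log-likelihood; rescale to a sum
    # over the (seq_len - 1) predicted tokens.
    loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)


def user_inference_score(user_samples: list[str]) -> float:
    """Average log-likelihood ratio over the attacker's samples from one user.

    A larger score suggests the user's data was used for fine-tuning; the
    attacker compares it against a threshold calibrated on held-out users.
    """
    ratios = [
        log_likelihood(finetuned, x) - log_likelihood(reference, x)
        for x in user_samples
    ]
    return sum(ratios) / len(ratios)
```

Averaging per-sample ratios is one natural aggregation choice under these assumptions; it only requires a handful of samples from the target user and black-box likelihood queries, matching the threat model described above.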
Submission Number: 40