Evaluating Memorization in Parameter-Efficient Fine-tuning

Published: 11 Jun 2025 · Last Modified: 13 Jul 2025 · MemFM · CC BY 4.0
Keywords: Parameter-efficient Fine-tuning, Large Language Models, Memorization, Differential Privacy
TL;DR: Our evaluation shows that parameter-efficient fine-tuning is more private than standard fine-tuning and works well with differential privacy.
Abstract: We study the impact of an emerging fine-tuning paradigm, parameter-efficient fine-tuning (PEFT), on privacy. We use an off-the-shelf data extraction attack as a vehicle to comprehensively evaluate memorization on three language models fine-tuned on two datasets, with each experiment repeated 3–5 times under different random seeds. Our main findings are: (1) for practitioners employing PEFT to construct personalized models, the fine-tuned models have lower privacy risks while maintaining reasonable utility; (2) for developers designing new PEFT algorithms, although PEFT is safer than standard fine-tuning, certain design choices in these algorithms unexpectedly increase memorization; and (3) for researchers auditing the privacy of fine-tuned models, employing weak differential privacy is sufficient to mitigate existing data extraction risks without significantly compromising model utility.
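To make the setting concrete, below is a minimal sketch (not the paper's code) of what PEFT combined with differential privacy can look like: LoRA adapters attached to a causal language model, with only the adapter weights trained under a hand-rolled DP-SGD loop (per-example gradient clipping plus Gaussian noise). The model name, LoRA settings, clipping bound, and noise multiplier are illustrative assumptions, and a production system would use a dedicated DP library with proper privacy accounting.

```python
# A minimal sketch of LoRA fine-tuning with DP-SGD.
# Assumes the HuggingFace `transformers` and `peft` libraries; all
# hyperparameters below are illustrative, not the paper's configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # assumption: any causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with LoRA adapters; only the adapter weights are
# trainable, which is what makes the fine-tuning "parameter-efficient".
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], lora_dropout=0.0)
model = get_peft_model(model, lora_cfg)

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-3)

clip_norm = 1.0    # per-example gradient clipping bound C
noise_mult = 0.5   # "weak" DP corresponds to a small noise multiplier sigma

def dp_sgd_step(batch_texts):
    """One DP-SGD step: clip each example's gradient, sum, add noise, average."""
    summed = [torch.zeros_like(p) for p in trainable]
    for text in batch_texts:  # microbatches of size 1 give per-example grads
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
        loss = model(**enc, labels=enc["input_ids"]).loss
        model.zero_grad()
        loss.backward()
        # Scale this example's gradient down to L2 norm <= clip_norm.
        total_norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in trainable))
        scale = min(1.0, clip_norm / (total_norm + 1e-6))
        for s, p in zip(summed, trainable):
            s += p.grad * scale
    # Add Gaussian noise calibrated to the clipping bound, then average.
    for p, s in zip(trainable, summed):
        noise = torch.normal(0.0, noise_mult * clip_norm, size=s.shape)
        p.grad = (s + noise) / len(batch_texts)
    optimizer.step()

# Usage: one noisy update over a tiny illustrative batch.
dp_sgd_step(["an example training sentence", "another example sentence"])
```

Because only the LoRA parameters receive gradients, the clipping and noising touch a small fraction of the model's weights, which is one intuition for why PEFT pairs well with differential privacy at modest utility cost.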
Submission Number: 8