Revisiting Privacy, Utility, and Efficiency Trade-offs when Fine-Tuning Large Language Models

14 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Privacy, Efficiency, Fine-tuning, Large Language Models
TL;DR: This work studies the trade-offs among privacy, efficiency, and utility when fine-tuning LLMs, contradicting the conventional wisdom that privacy comes at the cost of efficiency.
Abstract: We study the inherent trade-offs in minimizing privacy risks and maximizing utility, while maintaining high computational efficiency, when fine-tuning large language models (LLMs). A number of recent works in privacy research have attempted to mitigate the privacy risks posed by the memorization of fine-tuning data by using differentially private training methods (e.g., DP-SGD), albeit at a significantly higher computational cost. In parallel, several works in systems research have focused on developing parameter-efficient fine-tuning methods (e.g., LoRA). However, few works, if any, have investigated whether such efficient methods, in isolation, enhance or diminish privacy risks. In this paper, we investigate this gap and arrive at a surprising conclusion: efficient fine-tuning methods like LoRA mitigate privacy risks comparably to private fine-tuning methods like DP-SGD. Our empirical finding contradicts the prevailing wisdom that privacy and efficiency objectives are at odds during fine-tuning. We establish this finding by (a) carefully defining measures of privacy and utility that distinguish between the recollection of sensitive and non-sensitive tokens in the training and test datasets used for fine-tuning, and (b) extensive evaluations using multiple open-source language models from the Pythia, Gemma, Llama, and Qwen families and several domain-specific datasets.
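
To make the efficient fine-tuning setting concrete, below is a minimal LoRA sketch using the Hugging Face peft library. The base checkpoint, rank, and target modules are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal LoRA fine-tuning setup (illustrative; hyperparameters are assumptions).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-1b")
config = LoraConfig(
    r=8,                                  # adapter rank (illustrative choice)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],   # Pythia/GPT-NeoX attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()        # only the adapter weights are trainable
```

The abstract's measure (a) separates the recollection of sensitive from non-sensitive tokens. As an illustrative proxy, and not the authors' exact definition, the sketch below probes whether a fine-tuned model greedily reproduces a marked sensitive span when prompted with its preceding context; the record format (prefix plus sensitive span) and the checkpoint name are assumptions for illustration.

```python
# Sketch of a token-level "sensitive recollection" probe (illustrative proxy,
# not the paper's exact privacy measure).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "EleutherAI/pythia-1b"  # stand-in for any fine-tuned checkpoint under study
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

def sensitive_recollection_rate(records):
    """records: dicts with 'prefix' (context before the sensitive span) and
    'sensitive' (the span itself). Returns the fraction of records whose
    sensitive span is reproduced verbatim under greedy decoding."""
    hits = 0
    for r in records:
        prompt = tok(r["prefix"], return_tensors="pt")
        span_len = len(tok(r["sensitive"], add_special_tokens=False)["input_ids"])
        with torch.no_grad():
            out = model.generate(
                **prompt,
                max_new_tokens=span_len,
                do_sample=False,              # greedy decoding probes memorization
                pad_token_id=tok.eos_token_id,
            )
        continuation = tok.decode(
            out[0, prompt["input_ids"].shape[1]:], skip_special_tokens=True
        )
        hits += continuation.strip().startswith(r["sensitive"].strip())
    return hits / len(records)
```

Running the same probe on matched non-sensitive spans would give the contrast the abstract describes: a model that recollects non-sensitive training tokens (utility) without recollecting sensitive ones (privacy).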
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 5252