Abstract: Federated learning (FL) is a popular paradigm for collaborative training that avoids direct data exposure between clients. However, data privacy risks remain: FL-trained large language models can memorize and complete phrases and sentences from their training data when given their prefixes, so adversarial and honest-but-curious clients can recover other participants' training data simply through targeted prompting. In this work, we demonstrate that a popular and simple fine-tuning strategy, low-rank adaptation (LoRA), reduces memorization during FL by up to a factor of 10 without significant performance cost. We study this effect by performing fine-tuning tasks in high-risk domains such as medicine, law, and finance, and we observe a reduction in memorization across a wide variety of model families ranging from 1B to 70B parameters. We find that LoRA also reduces memorization in centralized learning, and we compare how the memorization patterns differ. Furthermore, we study the effect of hyperparameters and show that LoRA can be combined with other privacy-preserving techniques such as gradient clipping with Gaussian noise, secure aggregation, and Goldfish loss to further improve record-level privacy while maintaining performance.
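For readers unfamiliar with the adapter structure referenced in the abstract, below is a minimal illustrative sketch (not the paper's code) of a LoRA-adapted linear layer in PyTorch. The rank `r`, scaling `alpha`, initialization choices, and the FedAvg-style note about communicating only the low-rank factors are assumptions made for illustration, not details taken from the submission.

```python
# Illustrative sketch only: a LoRA linear layer where the pretrained weight is
# frozen and just the low-rank factors A and B are trained. In a cross-silo FL
# round, clients could fine-tune and share only {A, B} instead of full weights.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # A is small-random, B is zero-initialized so the adapter starts as a no-op
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # y = base(x) + scaling * x A^T B^T; only A and B receive gradients
        return self.base(x) + self.scaling * (x @ self.A.t() @ self.B.t())


layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(4, 768))
print(out.shape)  # torch.Size([4, 768])
```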
Submission Type: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: * Revised Introduction and Motivation Section to better motivate and situate our paper’s contributions in the existing literature.
* Updated Methodology section to motivate our 3-client cross-silo FL setup.
* Revised Results Section, bringing two figures from the Appendix into the main body. We now consistently compare centralized learning and federated learning and discuss the privacy-utility tradeoff directly in the main results. Updated Sections 4.1 and 4.2 to address reviewers' comments.
* Added a new Potential Theoretical Explanations section (Section 5) to the main body, with key theoretical points from Appendix L.
* Additional discussion of limitations in the Conclusion, addressing the need for further work for cross-device settings and for a better theoretical understanding.
Assigned Action Editor: ~Pin-Yu_Chen1
Submission Number: 6281