Keywords: Differential privacy, random projection
TL;DR: FA-LoRA fine-tuning satisfies $(\varepsilon, \delta)$-differential privacy without explicit additive noise
Abstract: We study the differential privacy (DP) of low-rank adaptation (LoRA) fine-tuning. Focusing on FA-LoRA (frozen $A$, trainable $B$), in which a single training step is equivalent to applying a random Wishart projection to the gradients, we prove a formal $(\varepsilon, \delta)$-DP guarantee without explicit additive noise. The resulting privacy parameters depend explicitly on the dataset sensitivity and the projection rank $r$. Moreover, the low-rank structure reduces memory and compute by design. To place these results in a broader context, we formalize the underlying operation as a general projection mechanism, of which FA-LoRA is an instance. This mechanism is of independent interest, since random projections are ubiquitous in machine learning.
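To make the projection view concrete, the following minimal PyTorch sketch (not from the submission) checks that one SGD step on $B$ with $A$ frozen changes the full weight $W = W_0 + BA$ by the full-parameter gradient right-multiplied by $A^\top A$, a rank-$r$ Wishart matrix. The dimensions `d_in`, `d_out`, `r`, the learning rate, and the quadratic toy loss are illustrative assumptions.

```python
# Minimal sketch, assuming the standard LoRA parameterization W = W0 + B @ A
# with A frozen (FA-LoRA) and B trained. Shows that one SGD step on B applies
# the random Wishart projection A^T A to the full-parameter gradient.
import torch

torch.manual_seed(0)
d_out, d_in, r, lr = 8, 16, 4, 0.1   # illustrative sizes and step size

W0 = torch.randn(d_out, d_in)                  # frozen pretrained weight
A = torch.randn(r, d_in) / r**0.5              # fixed random factor (not trained)
B = torch.zeros(d_out, r, requires_grad=True)  # trainable low-rank factor

x = torch.randn(d_in)
y = torch.randn(d_out)

def loss_fn(W):
    # Toy quadratic loss; any differentiable loss of W works the same way.
    return 0.5 * ((W @ x - y) ** 2).sum()

# One SGD step on B only.
loss_fn(W0 + B @ A).backward()
with torch.no_grad():
    delta_W = -lr * B.grad @ A                 # induced update to the full weight

# Full-parameter gradient G = dL/dW at the same point (B = 0, so W = W0).
W = W0.clone().requires_grad_(True)
loss_fn(W).backward()
G = W.grad

# The FA-LoRA step equals the full gradient projected by A^T A (rank-r Wishart).
print(torch.allclose(delta_W, -lr * G @ (A.T @ A), atol=1e-6))  # True
```

The identity follows from the chain rule: with $A$ frozen, $\partial L/\partial B = (\partial L/\partial W)A^\top$, so the induced weight change is $\Delta W = -\eta\,(\partial L/\partial W)A^\top A$; when $A$ has i.i.d. Gaussian entries, $A^\top A$ is a Wishart matrix of rank $r$.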
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 24421