On the Simplicity-Similarity Tradeoff of LoRA and Full Fine-Tuning

Published: 02 Mar 2026, Last Modified: 02 Mar 2026, Sci4DL 2026, CC BY 4.0
Keywords: Fine-tuning, LoRA, Simplicity Bias
Abstract: Fine-tuning is the dominant paradigm for adapting pre-trained models to downstream tasks. However, mounting evidence suggests that parameter-efficient methods, such as Low-Rank Adaptation (LoRA), converge to solutions distinct from those of Full Fine-Tuning (FFT). In this work, we investigate the underlying optimization biases driving this divergence. We demonstrate a clear difference in their learning dynamics: FFT exhibits a strong simplicity bias, regardless of the downstream task. LoRA, meanwhile, consistently prioritizes features already prevalent in the pre-training distribution, a phenomenon which we term similarity bias. Our findings provide a feature-level explanation for observed differences between LoRA and FFT, offering insights into how adaptation strategies influence model robustness and task-specific generalization.
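For readers unfamiliar with the mechanics, the LoRA parameterization contrasted with FFT in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation; the dimensions, rank, and initialization are hypothetical:

```python
import numpy as np

# Sketch: LoRA adapts a frozen pre-trained weight W by adding a low-rank
# update B @ A, training only A and B. FFT, by contrast, updates all of W.
rng = np.random.default_rng(0)
d_out, d_in, r = 8, 16, 2  # hypothetical dimensions; rank r << min(d_out, d_in)

W = rng.normal(size=(d_out, d_in))      # frozen pre-trained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection (zero init)

def lora_forward(x):
    """Adapted forward pass: (W + B @ A) @ x, with W kept frozen."""
    return W @ x + B @ (A @ x)

x = rng.normal(size=(d_in,))
# With B initialized to zero, the adapted model matches the pre-trained one.
assert np.allclose(lora_forward(x), W @ x)
# The update B @ A has rank at most r, constraining the reachable solutions.
assert np.linalg.matrix_rank(B @ A) <= r
```

The rank constraint on B @ A is what makes LoRA parameter-efficient, and it is one plausible source of the divergence from FFT solutions that the paper studies.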
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Style Files: I have used the style files.
Submission Number: 84