SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors

Published: 18 Jun 2024, Last Modified: 06 Jul 2024 · WANT@ICML 2024 Oral · CC BY 4.0
Keywords: Parameter Efficient Fine-Tuning, Large Language Models
Abstract: Popular parameter-efficient fine-tuning (PEFT) methods, such as LoRA and its variants, freeze pre-trained model weights \(\mathbf{W}\) and inject learnable matrices \(\mathbf{\Delta W}\). These \(\mathbf{\Delta W}\) matrices are structured for efficient parameterization, often using techniques like low-rank approximations or scaling vectors. However, these methods typically show a performance gap compared to full fine-tuning. Although recent PEFT methods have narrowed this gap, they do so at the cost of additional learnable parameters. We propose SVFT, a simple approach that fundamentally differs from existing methods: the structure imposed on \(\mathbf{\Delta W}\) depends on the specific weight matrix \(\mathbf{W}\). Specifically, SVFT updates \(\mathbf{W}\) as a sparse combination of outer products of its singular vectors, training only the coefficients (scales) of these sparse combinations. This approach allows fine-grained control over expressivity through the number of coefficients. Extensive experiments on language and vision benchmarks show that SVFT recovers up to \textbf{96\%} of full fine-tuning performance while training only \textbf{0.006 to 0.25\%} of parameters, outperforming existing methods that only recover up to \textbf{85\%} performance using \textbf{0.03 to 0.8\%} of the trainable parameter budget.
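Below is a minimal PyTorch sketch of the idea described in the abstract; it is not the authors' implementation. It assumes the simplest sparsity pattern (a diagonal matrix of coefficients \(\mathbf{M}\), one learnable scale per singular-vector outer product), and the class name `SVFTLinear` and all other details are illustrative.

```python
import torch
import torch.nn as nn

class SVFTLinear(nn.Module):
    """Illustrative SVFT-style layer (assumed implementation, not the paper's code).

    The frozen weight W is factored via SVD, and the update
    Delta W = U @ diag(m) @ V^T is parameterized by a small vector of
    learnable coefficients m, i.e. a sparse (here: diagonal) combination
    of outer products of W's singular vectors.
    """

    def __init__(self, weight: torch.Tensor, bias: torch.Tensor = None):
        super().__init__()
        # Frozen pre-trained weight and its singular vectors.
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.register_buffer("weight", weight)   # (out, in), frozen
        self.register_buffer("U", U)              # (out, r), frozen
        self.register_buffer("Vh", Vh)            # (r, in), frozen
        self.bias = bias
        # Learnable coefficients: one scale per outer product u_i v_i^T,
        # initialized to zero so training starts exactly from W.
        self.m = nn.Parameter(torch.zeros(S.shape[0]))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Delta W = U diag(m) V^T; only m is trained.
        delta_w = self.U @ torch.diag(self.m) @ self.Vh
        return nn.functional.linear(x, self.weight + delta_w, self.bias)

# Hypothetical usage: wrap a frozen linear layer's weight.
layer = nn.Linear(64, 32)
svft = SVFTLinear(layer.weight.detach().clone(), layer.bias.detach().clone())
out = svft(torch.randn(4, 64))
```

In this diagonal variant the number of trainable parameters per matrix equals the number of singular values; denser (but still sparse) patterns for \(\mathbf{M}\) would trade a few more coefficients for added expressivity, which is the fine-grained control the abstract refers to.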
Submission Number: 9