Keywords: Low Rank, Adapters, Symmetric, Low-Rank Adapters, LoRA, SymLoRA, Symmetric LoRA
TL;DR: Low Rank Adapters but with symmetric update matrices.
Abstract: In this paper, we introduce Symmetric Low-Rank Adapters, an optimized variant of LoRA with even fewer weights. This method uses low-rank symmetric weight matrices to learn downstream tasks more efficiently. Traditional LoRA adds fine-tuning weights to the original pre-trained weights via a Singular Value Decomposition (SVD)-like approach, i.e., model weights are fine-tuned via updates of the form $BA$ (where $B \in \mathbb{R}^{n\times r}$, $A \in \mathbb{R}^{r\times n}$, and $r$ is the rank of the update matrix). In contrast, our approach, named SymLoRA, represents the fine-tuning weights as a spectral decomposition, i.e., $Q \, \mathrm{diag}(\Lambda)\, Q^T$, where $Q \in \mathbb{R}^{n\times r}$ and $\Lambda \in \mathbb{R}^r$. SymLoRA requires approximately half the fine-tuning weights of LoRA. We show that this reduction comes with negligible loss in downstream efficacy.
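To make the parameterization concrete, below is a minimal PyTorch sketch of a linear layer with a symmetric low-rank update of the form $Q \, \mathrm{diag}(\Lambda)\, Q^T$, as described in the abstract. The class and attribute names (`SymLoRALinear`, `q`, `lam`, `scaling`) and the initialization choices are assumptions for illustration; the paper's actual implementation is not specified here.

```python
import torch
import torch.nn as nn


class SymLoRALinear(nn.Module):
    """Frozen linear layer plus a symmetric rank-r update Q diag(Lambda) Q^T.

    Illustrative sketch only: names and init scheme are hypothetical,
    not taken from the paper.
    """

    def __init__(self, base: nn.Linear, r: int, scaling: float = 1.0):
        super().__init__()
        n = base.in_features
        # A symmetric update requires a square n x n weight matrix.
        assert base.out_features == n, "symmetric update assumes a square weight"
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # keep pre-trained weights frozen

        self.q = nn.Parameter(torch.randn(n, r) * 0.01)  # Q in R^{n x r}
        self.lam = nn.Parameter(torch.zeros(r))          # Lambda in R^r (update starts at zero)
        self.scaling = scaling

    def delta_w(self) -> torch.Tensor:
        # Q diag(Lambda) Q^T: scale each column of Q by its eigenvalue, then multiply by Q^T.
        return (self.q * self.lam) @ self.q.T

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The update is symmetric, so applying delta_w or its transpose is equivalent.
        return self.base(x) + self.scaling * (x @ self.delta_w())
```

For a square $n \times n$ weight, this stores $nr + r$ trainable parameters per layer versus LoRA's $2nr$, which is the "approximately half" reduction claimed in the abstract.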
Submission Number: 15