Keywords: Machine Learning, Laplace Approximation, Low Rank Adaptation
TL;DR: We propose a method to mitigate source domain forgetting in Low Rank Adaptation using Laplace Approximation
Abstract: Parameter-efficient finetuning (PEFT) enables quick adaptation of large pre-trained language models to different downstream applications. However, this process often leads to catastrophic forgetting of the model's original domain knowledge. We address this issue with LALoRA, a weight-space regularization method that applies a Laplace approximation to Low-Rank Adaptation. We estimate how confident the model is in each parameter and constrain updates in high-confidence directions. This preserves original knowledge while still allowing efficient target-domain learning. We demonstrate an improved learning-forgetting trade-off compared to existing baselines and discuss different approximations of the loss-landscape curvature used to estimate parameter uncertainty.
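The abstract describes constraining LoRA updates in directions where a curvature-based confidence estimate is high. Below is a minimal sketch of that general idea, not the paper's actual implementation: it assumes PyTorch, a diagonal Fisher approximation of the loss-landscape curvature, and a simple quadratic penalty anchored at the source-domain parameters. The helper names, the `lam` weight, and the diagonal choice are illustrative assumptions.

```python
# Sketch: diagonal Laplace-style regularization of LoRA parameters.
# Assumptions (not from the paper): PyTorch, a diagonal Fisher estimate,
# and a quadratic penalty weighted by that estimate.

import torch


def estimate_diagonal_fisher(model, data_loader, loss_fn, lora_params):
    """Accumulate squared gradients of the source-domain loss as a diagonal
    curvature (Fisher) estimate for each LoRA parameter."""
    fisher = {name: torch.zeros_like(p) for name, p in lora_params.items()}
    n_batches = 0
    for inputs, targets in data_loader:
        model.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        for name, p in lora_params.items():
            if p.grad is not None:
                fisher[name] += p.grad.detach() ** 2
        n_batches += 1
    return {name: f / max(n_batches, 1) for name, f in fisher.items()}


def laplace_penalty(lora_params, anchor_params, fisher, lam=1.0):
    """Quadratic penalty discouraging movement of high-confidence
    (high-curvature) parameters away from their source-domain values."""
    penalty = 0.0
    for name, p in lora_params.items():
        penalty = penalty + (fisher[name] * (p - anchor_params[name]) ** 2).sum()
    return lam * penalty
```

In a target-domain training loop, the total objective would then be `task_loss + laplace_penalty(...)`, so parameters with high estimated curvature (high model confidence) are updated less, while low-confidence directions remain free to adapt.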
Submission Number: 39