Keywords: Gaussian Stochastic Weight Averaging, Approximate Bayesian Inference, Natural Language Processing, Large Language Models, Low-Rank Adaptation
TL;DR: We demonstrate that a simple combination of Low-Rank Adaptation with Gaussian Stochastic Weight Averaging enhances Large Language Models’ generalization, calibration, and robustness against distribution shifts.
Abstract: Fine-tuned Large Language Models (LLMs) often suffer from overconfidence and poor calibration, particularly when fine-tuned on small datasets. To address these challenges, we propose a simple combination of Low-Rank Adaptation (LoRA) with Gaussian Stochastic Weight Averaging (SWAG), facilitating approximate Bayesian inference in LLMs. Through extensive testing across several Natural Language Processing (NLP) benchmarks, we demonstrate that our straightforward and computationally efficient approach improves model generalization and calibration, performing competitively with more sophisticated methods for Bayesian inference in LLMs. We further show that our method exhibits greater robustness against distribution shift, as reflected in its improved performance on out-of-distribution tasks.
Submission Number: 24
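A minimal sketch of the idea described in the abstract: apply SWAG only to the low-rank LoRA parameters while the pretrained weights stay frozen, collect first and second moments of those parameters along the fine-tuning trajectory, and average predictions over Gaussian samples at test time. The class names (`LoRALinear`, `SWAGCollector`) and the diagonal-covariance simplification are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: diagonal SWAG over LoRA parameters only.
# Names and hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update W + (alpha/r) * B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                  # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.t() @ self.B.t()) * self.scaling


class SWAGCollector:
    """Tracks running first and second moments of the LoRA parameters along the
    fine-tuning trajectory (diagonal SWAG); sampling draws Gaussian weights
    around the averaged mean."""
    def __init__(self, params):
        self.params = list(params)
        self.mean = [p.detach().clone() for p in self.params]
        self.sq_mean = [p.detach().clone() ** 2 for p in self.params]
        self.n = 1

    def collect(self):
        self.n += 1
        for p, m, s in zip(self.params, self.mean, self.sq_mean):
            m.mul_((self.n - 1) / self.n).add_(p.detach() / self.n)
            s.mul_((self.n - 1) / self.n).add_(p.detach() ** 2 / self.n)

    def sample(self, scale: float = 0.5):
        """Load one approximate posterior sample into the live parameters (in place)."""
        with torch.no_grad():
            for p, m, s in zip(self.params, self.mean, self.sq_mean):
                var = torch.clamp(s - m ** 2, min=1e-12)
                p.copy_(m + scale * var.sqrt() * torch.randn_like(m))


# Toy usage: fine-tune the LoRA adapters, collect SWAG moments late in training,
# then average predictions over posterior samples at test time.
torch.manual_seed(0)
layer = LoRALinear(nn.Linear(16, 4), r=4)
opt = torch.optim.SGD([layer.A, layer.B], lr=1e-2)
swag = SWAGCollector([layer.A, layer.B])

x, y = torch.randn(64, 16), torch.randint(0, 4, (64,))
for step in range(200):
    loss = nn.functional.cross_entropy(layer(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    if step >= 100 and step % 10 == 0:               # snapshot moments late in training
        swag.collect()

probs = torch.zeros(64, 4)
for _ in range(20):                                  # Bayesian model average over SWAG samples
    swag.sample()
    probs += torch.softmax(layer(x), dim=-1) / 20
```

In practice one would wrap the attention projections of a pretrained LLM rather than a toy linear layer, and the full SWAG formulation also maintains a low-rank covariance term in addition to the diagonal used here.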