ABBA: Highly Expressive Hadamard Product Adaptation for Large Language Models

Published: 11 Jun 2025, Last Modified: 10 Jul 2025 · ES-FoMo III Spotlight · CC BY 4.0
Keywords: LoRA, Low-rank adaptation, PEFT, Parameter-Efficient Fine-Tuning
TL;DR: We introduce ABBA, a PEFT method that enhances expressivity by decoupling low-rank updates from pre-trained weights via a Hadamard product, consistently improving over SOTA methods.
Abstract: Large Language Models (LLMs) demonstrate strong performance across a variety of tasks, yet adapting them efficiently to new domains remains a challenge. Parameter-Efficient Fine-Tuning (PEFT) mitigates this by introducing lightweight, trainable modules while keeping most pre-trained weights frozen. We introduce **ABBA**, a new PEFT approach that models updates as a Hadamard product of two independently learnable low-rank matrices, fully decoupled from the pre-trained weights. This reparameterization significantly enhances expressivity under fixed parameter budgets. We provide a formal analysis of ABBA’s expressive capacity and demonstrate that it consistently outperforms existing PEFT methods on arithmetic and commonsense reasoning benchmarks across multiple models by a significant margin. Our code is available at: https://github.com/CERT-Lab/abba.
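To make the update structure described in the abstract concrete, the sketch below wraps a frozen `nn.Linear` with an ABBA-style adapter whose weight update is the Hadamard (element-wise) product of two independently learned low-rank factors. This is a minimal illustration under assumed choices: the class name, ranks `r1`/`r2`, scaling, and initialization are illustrative, not the authors' reference implementation (see the linked repository for the official code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ABBALinear(nn.Module):
    """Sketch of an ABBA-style adapter: the weight update is a Hadamard
    product of two independently learnable low-rank matrices, fully
    decoupled from the frozen pre-trained weight."""

    def __init__(self, base: nn.Linear, r1: int = 8, r2: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # keep pre-trained weights frozen

        out_f, in_f = base.weight.shape
        # Two independent low-rank factor pairs (assumed init scheme):
        # zero-initializing A2 makes the initial update exactly zero.
        self.B1 = nn.Parameter(torch.randn(out_f, r1) * 0.02)
        self.A1 = nn.Parameter(torch.randn(r1, in_f) * 0.02)
        self.B2 = nn.Parameter(torch.randn(out_f, r2) * 0.02)
        self.A2 = nn.Parameter(torch.zeros(r2, in_f))
        self.scale = alpha / (r1 * r2)  # illustrative scaling, not from the paper

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Delta W = (B1 @ A1) ⊙ (B2 @ A2), applied alongside the frozen layer.
        delta_w = (self.B1 @ self.A1) * (self.B2 @ self.A2)
        return self.base(x) + F.linear(x, self.scale * delta_w)
```

Materializing the dense `delta_w` here is purely for readability; a practical implementation would avoid forming the full update matrix at every forward pass.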
Submission Number: 51