EoRA: Fine-tuning-free Compensation for Compressed LLM with Eigenspace Low-Rank Approximation

Published: 02 Mar 2026, Last Modified: 12 Mar 2026 · ICLR 2026 Workshop ICBINB · CC BY 4.0
Keywords: Post-training Compression, Fine-tuning-free, Efficient Deep Learning, Efficient LLM, Efficient Inference
TL;DR: EoRA offers a prompt solution for improving the accuracy of compressed models under varying user requirements without fine-tuning.
Abstract: While post-training compression techniques effectively reduce the memory footprint, latency, and power consumption of Large Language Models (LLMs), they often result in noticeable accuracy degradation and remain limited by hardware and kernel constraints that restrict supported compression formats—ultimately reducing flexibility across a wide range of deployment scenarios. In this work, we propose EoRA—a novel, fine-tuning-free method that augments compressed LLMs with low-rank matrices, allowing users to rapidly enhance task-specific performance and freely balance the trade-off between accuracy and computational overhead beyond the constraints of compression formats. EoRA consistently outperforms prior fine-tuning-free low-rank methods in recovering the accuracy of compressed LLMs, achieving notable accuracy improvements (e.g., **10.84%** on ARC-Challenge, **6.74%** on MathQA, and **11.45%** on GSM8K for LLaMA3-8B compressed to 3-bit). We also introduce an optimized CUDA kernel that accelerates inference by up to 1.4× and reduces memory overhead by quantizing the EoRA low-rank components. Overall, EoRA offers a prompt solution for improving the accuracy of compressed models under varying user requirements, enabling more efficient and flexible deployment of LLMs. Code is available at https://github.com/NVlabs/EoRA.
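To make the idea concrete, here is a minimal numpy sketch of eigenspace-weighted low-rank error compensation in the spirit the abstract describes: the compression error `W - W_c` is projected through the eigenspace of a calibration activation covariance, truncated via SVD, and attached as a rank-`r` adapter. All function names, shapes, and the exact weighting are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def eora_compensate(W, W_compressed, X_calib, rank=16):
    """Sketch: approximate the compression error dW = W - W_c with a
    rank-r product B @ A, weighting the error by the eigenspace of the
    calibration activation covariance. Shapes: W (out, in), X_calib
    (in, n_samples). This is an illustrative assumption, not the
    official EoRA algorithm."""
    dW = W - W_compressed                        # (out, in) compression error
    # Eigendecomposition of the input activation covariance (in, in)
    cov = X_calib @ X_calib.T / X_calib.shape[1]
    eigvals, Q = np.linalg.eigh(cov)
    eigvals = np.clip(eigvals, 1e-8, None)       # guard against tiny/negative values
    S = Q * np.sqrt(eigvals)                     # eigenspace scaling, (in, in)
    # SVD of the projected error; keep only the top-`rank` components
    U, s, Vt = np.linalg.svd(dW @ S, full_matrices=False)
    B = U[:, :rank] * s[:rank]                   # (out, rank)
    A = Vt[:rank] @ np.linalg.inv(S)             # (rank, in), map back from eigenspace
    return B, A

# Usage: the compensated layer computes y = (W_c + B @ A) @ x,
# i.e. the frozen compressed weight plus a cheap low-rank correction.
```

Because `B @ A` never touches the compressed weight itself, this correction is independent of the compression format (quantization, sparsity, etc.), which is what lets the rank trade accuracy against overhead freely.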
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 30