Low-rank Factorization for LLM Compression with Dynamic Capacity Allocation and Block-level Refinement

08 May 2026 (modified: 09 May 2026) · ICML 2026 Workshop CoLoRAI Submission · CC BY 4.0
Keywords: Compression, Large language models, pruning, low-rank factorization, SVD, Transformers, efficiency
Abstract: We propose AA-SVD, a low-rank factorization-based compression framework for large language models that preserves performance without full-model retraining. Our method finds a low-rank approximation of each linear layer's weight matrix that minimizes the reconstruction error in activation space. A central principle underlying AA-SVD is that optimizing on the original inputs alone ignores the distribution shift introduced by upstream compression, while conditioning on the shifted inputs alone risks drifting away from the original output; AA-SVD accounts for both. Further, we refine each transformer block locally, minimizing block-level output distortion and allowing the compressed layers to jointly compensate for accumulated errors. During block-level refinement, we also learn the rank allocation across layers, concentrating capacity where it most reduces block-output distortion. We evaluate AA-SVD on the LLaMA and Qwen model families using language modeling and commonsense reasoning benchmarks. AA-SVD consistently outperforms existing SVD-based compression methods across a range of compression ratios, with the advantage becoming more pronounced at aggressive compression levels.
Submission Number: 134
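
For readers who want a concrete picture of the activation-space objective described in the abstract, the sketch below illustrates generic activation-aware SVD on a single linear layer: it truncates the SVD of the whitened weight W @ S, where S is a Cholesky factor of the calibration activations' Gram matrix, so that the rank-r factors minimize ||X W^T - X (A B)^T||_F rather than the plain weight error ||W - A B||_F. The function name, the damping constant, and the calibration interface are illustrative assumptions; this is a minimal sketch of the underlying idea, not the authors' AA-SVD implementation, which additionally handles distribution shift from upstream compression, block-level refinement, and learned rank allocation.

```python
import torch

def activation_aware_lowrank(W: torch.Tensor, X: torch.Tensor, rank: int):
    """Illustrative sketch (not the paper's code): rank-`rank` factors (A, B) of a
    linear layer's weight W (out, in) minimizing the activation-space error
    ||X @ W.T - X @ (A @ B).T||_F for calibration activations X (tokens, in)."""
    # Gram matrix of the calibration activations, lightly damped so the
    # Cholesky factorization is well defined (the 1e-6 constant is an assumption).
    gram = X.T @ X + 1e-6 * torch.eye(X.shape[1], dtype=X.dtype, device=X.device)
    S = torch.linalg.cholesky(gram)          # lower-triangular "whitener"
    # Minimizing the activation-space error is equivalent to a plain truncated
    # SVD of the whitened weight W @ S.
    U, sigma, Vh = torch.linalg.svd(W @ S, full_matrices=False)
    A = U[:, :rank] * sigma[:rank]           # (out, rank)
    # Undo the whitening on the right factor: B = Vh_r @ S^{-1}, solved in place.
    B = torch.linalg.solve_triangular(S, Vh[:rank], upper=False, left=False)
    return A, B                              # A @ B approximates W in activation space
```

At inference time, the dense layer can then be replaced by two smaller linear maps with weights B and A, giving rank x (in + out) parameters in place of in x out.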