Parameter-Efficient Subspace Optimization for LLM Fine-Tuning

19 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: parameter-efficient fine-tuning, subspace minimization, intrinsic dimension, large language model, LoRA
Abstract: This paper develops a new perspective on parameter-efficient fine-tuning for LLMs, inspired by the classical theory of subspace minimization. We introduce a unifying framework, **P**arameter-**E**fficient **S**ubspace **O**ptimization (**PESO**), which not only recovers many existing methods such as LoRA but also connects them to the principled algorithmic and theoretical foundations of subspace optimization. This connection highlights a natural "exploration-exploitation" view of subspace methods, guiding the design of new algorithms that achieve strong convergence performance while preserving memory efficiency. Importantly, our framework establishes convergence in the full parameter space, resolving a critical gap in LoRA variants, whose low-rank updates lack such guarantees. We further instantiate the framework as a practical algorithm, PESO-LoRA, based on a LoRA-type parameterization. Our algorithm achieves notable improvements over existing methods on standard benchmarks.
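To make the "exploration-exploitation" view concrete, below is a minimal PyTorch sketch (our own illustration, not code from the paper): a LoRA-style linear layer whose trainable low-rank factors are periodically merged into the frozen base weight and re-initialized, so optimization can move beyond any single fixed subspace while optimizer state stays low-rank. The `refresh_subspace` rule here is a hypothetical placeholder; the actual update schedule used by PESO-LoRA may differ.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W0 plus a trainable low-rank update B @ A (LoRA-style)."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Base weight is frozen: gradients and optimizer state exist only for A, B.
        self.weight = nn.Parameter(torch.empty(out_features, in_features), requires_grad=False)
        nn.init.kaiming_uniform_(self.weight)
        # "Exploitation": optimize within the current rank-r subspace spanned by A.
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))  # zero init: start exactly at W0
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ (self.weight + self.scale * self.B @ self.A).T

    @torch.no_grad()
    def refresh_subspace(self) -> None:
        """Hypothetical "exploration" step: fold the current low-rank update into
        the frozen base weight, then re-initialize the factors so subsequent
        steps can leave the old subspace. The overall iterate lives in the full
        parameter space even though each phase is low-rank."""
        self.weight += self.scale * self.B @ self.A
        nn.init.normal_(self.A, std=0.01)
        nn.init.zeros_(self.B)
```

In this sketch, calling `refresh_subspace` every few hundred steps alternates subspace exploitation with exploration of new directions; this is one plausible reading of the framework, under the stated assumptions.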
Primary Area: optimization
Submission Number: 21660