Parameter-Efficient Fine-Tuning of LLMs with Mixture of Space Experts

ICLR 2026 Conference Submission24927 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Large Language Models, Non-Euclidean Space, Parameter-Efficient Fine-tuning, Mixture of Experts
Abstract: Large language models (LLMs) have achieved remarkable progress, with Parameter-Efficient Fine-Tuning (PEFT) emerging as a key technique for downstream task adaptation. However, existing PEFT methods operate mainly in Euclidean space, which fundamentally limits their capacity to capture the complex geometric structures inherent in data. While alternative geometric spaces, such as hyperbolic geometries for hierarchical data and spherical manifolds for circular patterns, offer theoretical advantages, constraining representations to a single manifold type still limits expressiveness, even with learnable curvature parameters. To address this, we propose \textbf{MoS} (Mixture of Space), a unified framework that leverages multiple geometric spaces simultaneously to learn richer, curvature-aware representations. Building on this scheme, we develop \textbf{MoSELoRA}, which extends Low-Rank Adaptation (LoRA) with heterogeneous geometric experts, enabling models to adaptively select or combine appropriate geometric spaces based on the input. To address the computational overhead of frequent manifold mappings, we further introduce a lightweight projection mechanism. We also provide empirical insights into how curvature optimization affects training stability and model performance. Experiments across diverse benchmarks demonstrate that MoSELoRA consistently outperforms strong baselines.
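
To make the described architecture concrete, below is a minimal PyTorch sketch of a gated mixture of geometric LoRA experts, based only on the abstract's high-level description. All class and parameter names (`GeometricLoRAExpert`, `MoSELoRALayer`, the curvature-dependent re-scaling standing in for exp/log manifold maps, and the token-wise softmax gate) are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GeometricLoRAExpert(nn.Module):
    """One low-rank adapter branch tied to a learnable curvature.

    curvature < 0 stands in for a hyperbolic-like branch, > 0 for a
    spherical-like branch, and 0 for plain Euclidean LoRA. The tanh/sin
    re-scalings below are a hypothetical lightweight proxy for a full
    tangent-space round trip (exp map -> linear map -> log map).
    """

    def __init__(self, d_in, d_out, rank, curvature=0.0):
        super().__init__()
        self.A = nn.Linear(d_in, rank, bias=False)
        self.B = nn.Linear(rank, d_out, bias=False)
        nn.init.zeros_(self.B.weight)  # standard LoRA init: adapter starts as a zero update
        self.curvature = nn.Parameter(torch.tensor(float(curvature)))

    def forward(self, x):
        z = self.A(x)
        c = self.curvature
        if c.item() != 0.0:
            scale = c.abs().sqrt().clamp(min=1e-6)
            z = torch.tanh(scale * z) / scale if c.item() < 0 else torch.sin(scale * z) / scale
        return self.B(z)


class MoSELoRALayer(nn.Module):
    """Frozen base linear layer plus a gated mixture of geometric LoRA experts."""

    def __init__(self, base_linear, rank=8, curvatures=(-1.0, 0.0, 1.0)):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():
            p.requires_grad_(False)  # only the experts and the gate are trained
        d_in, d_out = base_linear.in_features, base_linear.out_features
        self.experts = nn.ModuleList(
            GeometricLoRAExpert(d_in, d_out, rank, c) for c in curvatures
        )
        self.gate = nn.Linear(d_in, len(curvatures))

    def forward(self, x):
        weights = F.softmax(self.gate(x), dim=-1)                      # (..., n_experts)
        updates = torch.stack([e(x) for e in self.experts], dim=-1)    # (..., d_out, n_experts)
        delta = (updates * weights.unsqueeze(-2)).sum(dim=-1)          # gate-weighted combination
        return self.base(x) + delta


# Example usage: wrap a frozen projection and run a dummy batch.
layer = MoSELoRALayer(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(2, 16, 768))  # (batch, seq, hidden)
```

The key design choice sketched here is that each expert carries its own learnable curvature while sharing the standard LoRA low-rank form, and the gate lets the model weight the Euclidean, hyperbolic-like, and spherical-like branches per input, matching the "adaptively select or combine appropriate geometric spaces" behavior described in the abstract.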
Primary Area: foundation or frontier models, including LLMs
Submission Number: 24927