HiLoRA: Adaptive Hierarchical LoRA Routing for Training-Free Domain Generalization

12 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Low-Rank Adaptation; Hierarchical Routing; Domain Generalization; Task-Specific Adaptation
Abstract: Low-Rank Adaptation (LoRA) has emerged as a widely used technique for adapting large language models (LLMs) to new domains, owing to its modular design and broad availability on platforms such as HuggingFace. This availability has motivated efforts to reuse existing LoRAs for domain generalization. However, existing methods often rely on explicit task labels or additional training, both of which are impractical for deployment. Moreover, they typically activate a fixed number of entire LoRA modules, leading to parameter redundancy or insufficiency that degrades performance. In this paper, we propose HiLoRA, a training-free framework that performs adaptive hierarchical routing over LoRA pools. Drawing on the structural properties of LoRA, we define rank-one components (ROCs), in which each rank parameter is regarded as an independent unit. For a given input sequence, HiLoRA first adaptively selects a subset of LoRAs and determines their ROC allocation based on Gaussian likelihoods at the sequence level. At the token level, it further refines routing by activating only the most informative ROCs. We also provide theoretical guarantees that HiLoRA selects the most relevant LoRAs with high probability. Extensive experiments show that HiLoRA achieves substantial improvements in domain generalization, with accuracy gains of up to $55\%$ over state-of-the-art baselines, while maintaining comparable inference throughput.
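To make the two-level routing described in the abstract concrete, below is a minimal sketch of how sequence-level LoRA selection via Gaussian likelihoods and token-level ROC activation could fit together. All function names, the diagonal-Gaussian statistics, the likelihood-mass threshold, and the inner-product ROC score are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gaussian_log_likelihood(x, mean, var):
    """Log-likelihood of feature x under a diagonal-covariance Gaussian."""
    return -0.5 * np.sum((x - mean) ** 2 / var + np.log(2 * np.pi * var))

def select_loras(seq_feature, lora_stats, mass=0.9):
    """Sequence level: score each LoRA in the pool by the Gaussian
    likelihood of the sequence feature, then keep the smallest subset
    covering `mass` of the normalized likelihood (adaptive subset size)."""
    logps = np.array([gaussian_log_likelihood(seq_feature, m, v)
                      for m, v in lora_stats])
    probs = np.exp(logps - logps.max())
    probs /= probs.sum()
    kept, acc = [], 0.0
    for idx in np.argsort(-probs):
        kept.append(int(idx))
        acc += probs[idx]
        if acc >= mass:
            break
    return kept, probs

def route_tokens(token_feats, roc_in_vectors, k=4):
    """Token level: activate only the k highest-scoring ROCs per token.
    Each ROC is one rank-one slice b_i a_i^T of a LoRA update BA; the
    score |<token, a_i>| below is an illustrative choice."""
    scores = np.abs(token_feats @ roc_in_vectors.T)   # (n_tokens, n_rocs)
    return np.argsort(-scores, axis=1)[:, :k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n_loras, rank, n_tokens = 16, 5, 8, 3
    # Hypothetical per-LoRA sequence-feature statistics (mean, diag variance).
    lora_stats = [(rng.normal(size=d), np.ones(d)) for _ in range(n_loras)]
    kept, probs = select_loras(rng.normal(size=d), lora_stats)
    # Pool the input-side rank-one vectors (rows of A) of the kept LoRAs.
    roc_in_vectors = rng.normal(size=(len(kept) * rank, d))
    active = route_tokens(rng.normal(size=(n_tokens, d)), roc_in_vectors)
    print("kept LoRAs:", kept, "| active ROC ids per token:\n", active)
```

Under this reading, the sequence-level step bounds how many ROCs are in play before any token is routed, which is what would keep the token-level top-k selection cheap at inference time.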
Primary Area: foundation or frontier models, including LLMs
Submission Number: 4341