LoRAtorio: An intrinsic approach to LoRA Skill Composition

ICLR 2026 Conference Submission 179 (anonymous authors)

01 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: Multi-LoRA composition, diffusion
Abstract: Low-Rank Adaptation (LoRA) has become a widely adopted technique in text-to-image diffusion models, enabling the personalisation of visual concepts such as characters, styles, and objects. However, existing approaches struggle to compose multiple LoRA adapters effectively, particularly in open-ended settings where the number and nature of the required skills are not known in advance. In this work, we present LoRAtorio, a novel training-free framework for multi-LoRA composition that leverages intrinsic model behaviour. Our method is motivated by two key observations: (1) LoRA adapters trained on narrow domains produce unconditioned denoised outputs that diverge from the base model, and (2) when conditioned out-of-distribution, LoRA outputs behave more like the base model than when conditioned in-distribution. In the single-LoRA scenario, personalisation and customisation achieve exceptional performance without catastrophic forgetting; performance, however, deteriorates quickly as multiple adapters are loaded. Our method operates in the latent space by dividing it into spatial patches and computing the cosine similarity between each patch's predicted noise and that of the base model. These similarities are used to construct a spatially aware weight matrix, which guides a weighted aggregation of LoRA outputs. To address domain drift, we further propose a modification to classifier-free guidance that incorporates the base model's unconditional score into the composition. We extend this formulation to a dynamic module-selection setting, enabling inference-time selection of relevant LoRA adapters from a large pool. LoRAtorio achieves state-of-the-art performance, showing up to a 1.3% improvement in CLIPScore and a 72.43% win rate in GPT-4V pairwise evaluations, and generalises effectively to multiple latent diffusion models. Code will be made available.
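The patch-wise weighting described in the abstract can be sketched as follows. This is a minimal, hypothetical NumPy illustration, not the authors' implementation: the function names (`patch_weights`, `compose`), the patch size, the softmax temperature `tau`, and the specific mapping from cosine similarity to weight (here, 1 − cosine, assuming that lower similarity to the base model signals an "active" adapter, per observation (2)) are all assumptions filled in for clarity.

```python
import numpy as np

def patch_weights(eps_base, eps_loras, patch=8, tau=0.1):
    """Hypothetical sketch of per-patch LoRA weighting.

    eps_base:  (C, H, W) noise prediction of the base model
    eps_loras: list of K (C, H, W) noise predictions, one per LoRA
    Returns weights of shape (K, H//patch, W//patch) summing to 1
    over the K adapters at each spatial patch.
    """
    C, H, W = eps_base.shape
    ph, pw = H // patch, W // patch

    def to_patches(x):
        # (C, H, W) -> (ph, pw, C*patch*patch): one flat vector per patch
        return (x.reshape(C, ph, patch, pw, patch)
                 .transpose(1, 3, 0, 2, 4)
                 .reshape(ph, pw, -1))

    b = to_patches(eps_base)
    scores = []
    for eps in eps_loras:
        a = to_patches(eps)
        cos = (a * b).sum(-1) / (
            np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + 1e-8)
        # Assumed mapping: lower similarity to the base model means the
        # adapter is in-distribution for that patch, so give it more weight.
        scores.append(1.0 - cos)
    s = np.stack(scores)                        # (K, ph, pw)
    w = np.exp(s / tau)
    return w / w.sum(axis=0, keepdims=True)     # softmax over adapters

def compose(eps_base, eps_loras, patch=8, tau=0.1):
    """Weighted aggregation of LoRA noise predictions (sketch)."""
    w = patch_weights(eps_base, eps_loras, patch, tau)
    # Upsample the patch grid back to full (H, W) resolution
    w_full = np.repeat(np.repeat(w, patch, axis=1), patch, axis=2)
    stacked = np.stack(eps_loras)               # (K, C, H, W)
    return (w_full[:, None] * stacked).sum(axis=0)
```

The composed prediction would then replace the conditional score inside classifier-free guidance, with the abstract's proposed modification using the base model's unconditional score as the guidance anchor, e.g. `eps = eps_base_uncond + scale * (eps_composed - eps_base_uncond)`.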
Primary Area: generative models
Submission Number: 179