Zero-Shot Adaptation of Parameter-Efficient Fine-Tuning in Diffusion Models

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: Transfer pre-trained low-rank adjustments (like LoRA) between diffusion models without retraining by projecting them into the new model's weight space. No need for original training data!
Abstract: We introduce ProLoRA, which enables zero-shot adaptation of parameter-efficient fine-tuning in text-to-image diffusion models. ProLoRA transfers pre-trained low-rank adjustments (e.g., LoRA) from a source model to a target model without additional training data. This overcomes a limitation of traditional methods, which require retraining when switching base models, a process that is often infeasible due to data constraints. ProLoRA achieves this by projecting the source adjustments into the target model's weight space, leveraging subspace and null-space similarities and selectively targeting aligned layers. Evaluations on established text-to-image models demonstrate successful knowledge transfer and comparable performance without retraining.
Lay Summary: We propose ProLoRA, a method that enables training-free transfer of adapters between a source and a target generative model. A key advantage is that it operates without any training dataset and runs entirely offline.
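
For intuition, the sketch below illustrates the kind of projection the abstract describes: a source LoRA update is projected onto a dominant subspace of the corresponding target layer's weights and re-factored at the same rank, with a simple subspace-similarity score used to skip poorly aligned layers. This is a minimal illustration under assumed shapes, not the authors' ProLoRA implementation; the function names (`subspace_alignment`, `project_lora`), the subspace dimension, and the alignment threshold are all placeholders we introduce here.

```python
import torch

def subspace_alignment(W_src: torch.Tensor, W_tgt: torch.Tensor, k: int = 32) -> float:
    """Similarity of the top-k left singular subspaces of two weight matrices
    (mean squared principal cosine, in [0, 1]). Illustrative heuristic only."""
    U_s, _, _ = torch.linalg.svd(W_src, full_matrices=False)
    U_t, _, _ = torch.linalg.svd(W_tgt, full_matrices=False)
    return ((U_t[:, :k].T @ U_s[:, :k]).pow(2).sum() / k).item()

def project_lora(W_tgt: torch.Tensor, A: torch.Tensor, B: torch.Tensor,
                 rank: int, subspace_dim: int = 128):
    """Project a source LoRA update (delta_W = B @ A) onto the target layer's
    dominant column subspace and re-factor it as a rank-`rank` adapter."""
    delta_W = B @ A                                   # (d_out, d_in) source update
    U_t, _, _ = torch.linalg.svd(W_tgt, full_matrices=False)
    U_k = U_t[:, :subspace_dim]                       # dominant column subspace of target weights
    delta_proj = U_k @ (U_k.T @ delta_W)              # project update into that subspace
    U, S, Vh = torch.linalg.svd(delta_proj, full_matrices=False)
    B_new = U[:, :rank] * S[:rank]                    # (d_out, rank)
    A_new = Vh[:rank, :]                              # (rank, d_in)
    return A_new, B_new

# Hypothetical example: one 640x640 projection layer with a rank-8 LoRA.
W_src, W_tgt = torch.randn(640, 640), torch.randn(640, 640)
A, B = 0.01 * torch.randn(8, 640), 0.01 * torch.randn(640, 8)

if subspace_alignment(W_src, W_tgt) > 0.5:            # transfer only well-aligned layers (threshold assumed)
    A_new, B_new = project_lora(W_tgt, A, B, rank=8)
    W_adapted = W_tgt + B_new @ A_new                 # apply the transferred adapter
```

The paper's full method also exploits null-space similarities and per-layer selection; the threshold and subspace dimension above are chosen only for illustration.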
Primary Area: General Machine Learning->Transfer, Multitask and Meta-learning
Keywords: Diffusion Models, Parameter-Efficient Fine-Tuning, Low-Rank Adaptation, Transfer Learning
Submission Number: 13217