Low-Rank Continual Personalization of Diffusion Models

Published: 05 Mar 2025, Last Modified: 14 Apr 2025
Venue: SCOPE - ICLR 2025 Poster
License: CC BY 4.0
Track: Main paper track (up to 5 pages excluding references and appendix)
Keywords: continual learning, personalization, diffusion models, low-rank adaptation
TL;DR: We investigate diverse low-rank initialization and merging methods for continual object and style personalization with diffusion models.
Abstract: Recent personalization methods for diffusion models, such as DreamBooth and LoRA, allow fine-tuning pre-trained models to generate new concepts. However, applying these techniques across consecutive tasks, e.g., to include new objects or styles, leads to forgetting of previous knowledge due to mutual interference between the adapters. In this work, we tackle the problem of continual customization under a rigorous regime with no access to past tasks' adapters. In this scenario, we investigate how different adapter initialization and merging methods can improve the quality of the final model. To that end, we evaluate naive continual fine-tuning of customized models and compare it with three methods for training consecutive adapters: sequentially merging new adapters, merging orthogonally initialized adapters, and updating only relevant task-specific weights. Our experiments show that these techniques mitigate forgetting compared to the naive approach, and our analyses reveal the distinct traits of each technique and their effects on the plasticity and stability of the continually adapted model. The code is available at \url{https://github.com/luk-st/continual-lora}.
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 37
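For illustration, below is a minimal sketch of the sequential-merging variant described in the abstract, assuming a single linear layer. The layer shapes, the rank, and the `train_adapter` helper are hypothetical placeholders for exposition, not the repository's actual implementation.

```python
# Minimal sketch: sequential merging of LoRA adapters across tasks.
# Assumed/hypothetical: layer dimensions, rank, and the train_adapter stub.
import torch

d, k, r = 64, 64, 4            # layer dims and LoRA rank (assumed values)
W = torch.randn(d, k)          # frozen pre-trained weight of one layer

def train_adapter(W_frozen, rank):
    """Placeholder for LoRA fine-tuning on one task: returns low-rank factors."""
    A = torch.randn(rank, W_frozen.shape[1]) * 0.01  # down-projection
    B = torch.zeros(W_frozen.shape[0], rank)         # up-projection (zero-init)
    # ... optimize A, B on the task's data while W_frozen stays fixed ...
    return A, B

num_tasks = 3
for t in range(num_tasks):
    A_t, B_t = train_adapter(W, r)  # adapter for task t, trained against current W
    W = W + B_t @ A_t               # merge into the base before the next task
    # Only the merged W is carried forward; past tasks' adapters are not stored,
    # matching the no-access-to-past-adapters regime described above.
```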