Keywords: meta-learning, LoRA, generative models, ID personalization, FLUX
TL;DR: We introduce Meta-Low-Rank Adaptation (Meta-LoRA), a novel framework that leverages meta-learning to encode domain-specific priors into LoRA-based identity personalization.
Abstract: Personalizing text-to-image models to create subject-specific content from limited images is a critical challenge in generative AI. Current methods force a difficult choice between slow, high-fidelity fine-tuning and fast, tuning-free approaches that can struggle with identity details and often replicate the reference pose. We introduce Meta-Low-Rank Adaptation (Meta-LoRA), a novel framework that enhances LoRA-based personalization by meta-learning a domain-specific prior for human identity. Our key insight is to learn a shared, low-dimensional manifold of general identity features from multiple subjects, which provides a powerful foundation for rapidly adapting a small, identity-specific component to a new person from a single image. To enable a rigorous evaluation that addresses pose-copying biases, we introduce Meta-PHD, a diverse benchmark dataset, and R-FaceSim, a robust new similarity metric. On this benchmark, Meta-LoRA converges 1.67× faster than standard LoRA while reaching superior identity fidelity. Our findings show that Meta-LoRA not only outperforms its direct baseline but also achieves a more effective balance between identity preservation and prompt adherence than state-of-the-art tuning-free methods. More broadly, our work demonstrates that meta-learning provides a practical and efficient pathway for adapting large generative models, bridging the gap between existing fine-tuning and conditioning-based paradigms. The code, model weights, and dataset will be released publicly upon acceptance.
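The core idea in the abstract — a shared, meta-learned low-rank prior plus a small per-identity component adapted from a single image — can be illustrated with a toy sketch. This is a hypothetical NumPy illustration, not the paper's actual parameterization: it assumes a LoRA-style update `W + B @ A` where the factor `B` plays the role of the shared meta-learned prior (frozen at adaptation time) and only the small factor `A` is fitted to a new "identity" (here, a synthetic least-squares target standing in for one reference image).

```python
import numpy as np

# Hypothetical sketch of the Meta-LoRA idea (names and the B/A split are
# assumptions, not the paper's exact method): B is a shared, meta-learned
# low-rank factor frozen at adaptation time; only the small per-identity
# factor A is fitted to data from a new subject.
rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2

W = rng.normal(size=(d_out, d_in))        # frozen base weight
B = rng.normal(size=(d_out, r)) * 0.5     # shared "meta-learned" factor (frozen here)
A = np.zeros((r, d_in))                   # per-identity factor, adapted below

# Toy target: an "identity" that genuinely lies on the span of the shared prior B.
W_target = W + B @ rng.normal(size=(r, d_in))
X = rng.normal(size=(d_in, 16))           # toy activations standing in for one reference image

init_err = np.linalg.norm(W @ X - W_target @ X)

# Adapt only A by gradient descent on a least-squares objective; B stays fixed,
# mimicking fast per-identity adaptation on top of a shared prior.
lr = 0.1
for _ in range(500):
    residual = (W + B @ A) @ X - W_target @ X
    grad_A = B.T @ residual @ X.T / X.shape[1]  # gradient w.r.t. A only
    A -= lr * grad_A

final_err = np.linalg.norm((W + B @ A) @ X - W_target @ X)
print(init_err, final_err)
```

Because the target lies in the span of the shared factor, fitting only the rank-`r` component `A` drives the residual down quickly; this is the intuition behind why a good shared prior can make single-image adaptation fast.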
Primary Area: generative models
Submission Number: 18444