FastFace: Training-Free Identity Preservation Tuning in Distilled Diffusion via Guidance and Attention

ICLR 2026 Conference Submission 19252 Authors

19 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: diffusion models, id-preserving generation, distillation
TL;DR: training-free techniques for adaptation of pretrained id-preserving adapters to distilled diffusion models
Abstract: The recent proliferation of identity-preserving (ID) adapters has significantly advanced personalized generation with diffusion models. However, these adapters are predominantly co-trained with base diffusion models, inheriting their critical drawback: slow, multi-step inference. This work addresses the challenge of adapting pre-trained ID adapters to much faster distilled diffusion models without requiring any further training. We introduce FastFace, a universal framework that achieves this via two key mechanisms: (1) the decomposition and adaptation of classifier-free guidance for few-step stylistic generation, and (2) attention manipulation within decoupled blocks to enhance identity similarity and fidelity. We demonstrate that FastFace generalizes effectively across various distilled models and maintains full compatibility with a wide range of existing ID-preserving methods, enabling high-fidelity personalized image generation at unprecedented speeds.
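The abstract's first mechanism builds on classifier-free guidance (CFG), the standard technique of extrapolating from an unconditional to a conditional noise prediction. As a reference point, here is a minimal NumPy sketch of baseline CFG; the function name `cfg_combine` and the toy arrays are illustrative, and FastFace's specific decomposition for few-step stylistic generation is detailed in the paper itself, not reproduced here.

```python
import numpy as np

def cfg_combine(eps_uncond, eps_cond, guidance_scale):
    """Standard classifier-free guidance: extrapolate from the
    unconditional noise prediction toward the conditional one."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy stand-ins for the denoiser's noise predictions.
eps_uncond = np.zeros(4)
eps_cond = np.ones(4)

# guidance_scale = 1.0 recovers the conditional prediction;
# larger scales amplify the conditioning signal.
print(cfg_combine(eps_uncond, eps_cond, 1.0))
print(cfg_combine(eps_uncond, eps_cond, 7.5))
```

Distilled few-step models are typically trained without (or with baked-in) CFG, which is why naively reusing an adapter's guidance setup degrades them; adapting this combination step is the motivation for the decomposition the abstract describes.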
Supplementary Material: zip
Primary Area: generative models
Submission Number: 19252