Keywords: Knowledge Distillation, One-Step Generation, Diffusion Model Acceleration, Low-rank Rotation, Text-to-Image Generation
TL;DR: We propose Directional Knowledge Distillation (DKD), a framework that distills multi-step diffusion models into effective one-step generators.
Abstract: Despite the impressive performance of diffusion models such as Stable Diffusion (SD) in image generation, their slow inference limits practical deployment. Recent works accelerate inference by distilling multi-step diffusion models into one-step generators. To better understand the distillation mechanism, we analyze how U-Net/DiT weights change between one-step students and their multi-step teacher counterparts. Our analysis reveals that changes in weight direction significantly exceed changes in weight norm, highlighting weight direction as the key factor during distillation. Motivated by this insight, we propose **Lo**w-rank **R**ot**a**tion of weight **D**irection (LoRaD), which models these structured directional changes with learnable low-rank rotation matrices. We further integrate LoRaD into Variational Score Distillation (VSD), resulting in Directional Knowledge Distillation (DKD), a novel one-step distillation framework. DKD achieves state-of-the-art FID scores on COCO 2014 and COCO 2017 while training only approximately 10\% of the U-Net's parameters. Furthermore, the distilled one-step model demonstrates strong versatility and scalability, generalizing well to downstream tasks such as controllable generation, relation inversion, and high-resolution synthesis.
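The core mechanism described in the abstract, rotating a frozen weight's direction with a learnable low-rank rotation while leaving its norm untouched, can be sketched roughly as follows. This is a minimal illustration of the general idea under stated assumptions, not the paper's actual parameterization: the class name `LoRaDLinear`, the rank, and the use of a matrix exponential of low-rank skew-symmetric factors are choices made only for this sketch.

```python
# Minimal PyTorch sketch (illustrative assumption, not the paper's code):
# rotate the direction of a frozen linear weight with a learnable low-rank
# rotation. Rotating the input space preserves each row's norm, so only the
# weight direction changes, matching the observation in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRaDLinear(nn.Module):  # hypothetical name used only in this sketch
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.register_buffer("w0", base.weight.detach().clone())  # (out, in), frozen
        self.bias = base.bias
        in_dim = self.w0.shape[1]
        # Low-rank factors defining a skew-symmetric generator S = A B^T - B A^T.
        # A starts at zero so the rotation is the identity at initialization.
        self.A = nn.Parameter(torch.zeros(in_dim, rank))
        self.B = nn.Parameter(torch.randn(in_dim, rank) * 1e-3)

    def forward(self, x):
        S = self.A @ self.B.T - self.B @ self.A.T  # skew-symmetric (in, in)
        R = torch.matrix_exp(S)                    # orthogonal rotation, det = 1
        w = self.w0 @ R                            # rotates each row's direction; row norms unchanged
        return F.linear(x, w, self.bias)

# Usage: wrap a frozen teacher layer; only ~2 * in_dim * rank parameters are new.
layer = LoRaDLinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(4, 768))
```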
Supplementary Material: zip
Primary Area: generative models
Submission Number: 3949