One-Step Diffusion Distillation via Deep Equilibrium Models

Published: 23 Jun 2023, Last Modified: 15 Jul 2023
Keywords: Deep Equilibrium Models, Diffusion Models, Distillation, Generative Models
TL;DR: One-Step Diffusion Distillation via Deep Equilibrium Models
Abstract: Diffusion models excel at producing high-quality samples but naively require hundreds of iterations, prompting multiple attempts to distill the generation process into a faster network. Existing approaches, however, often require complex multi-stage distillation and perform sub-optimally in single-step image generation. In response, we introduce a simple yet effective means of diffusion distillation: *directly* mapping the initial noise to the resulting image. Of particular importance to our approach is a new Deep Equilibrium (DEQ) model used as the distilled architecture: the Generative Equilibrium Transformer (GET). Our method enables fully offline training with just noise/image pairs from the diffusion model, while achieving superior performance to existing one-step methods on comparable training budgets. The DEQ architecture proves crucial: GET matches a $5\times$ larger ViT in terms of FID scores while striking a critical balance between computational cost and image quality. Code, checkpoints, and datasets will be released.
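To make the setup concrete, below is a minimal PyTorch sketch of the two ideas the abstract describes: a DEQ-style generator that maps noise directly to an image via a fixed-point computation, and fully offline distillation on precomputed noise/image pairs. All names (`TinyDEQBlock`, `OneStepDEQGenerator`) and the naive fixed-point solver are hypothetical illustrations, not the authors' GET implementation.

```python
# Hypothetical sketch of one-step diffusion distillation with a DEQ-style
# student. A real GET uses a transformer layer and an implicit-differentiation
# solver; this toy version unrolls a naive fixed-point iteration instead.

import torch
import torch.nn as nn

class TinyDEQBlock(nn.Module):
    """Stand-in layer f(h, z) whose fixed point h* = f(h*, z) defines the output."""
    def __init__(self, dim):
        super().__init__()
        self.mix = nn.Linear(2 * dim, dim)

    def forward(self, h, z):
        return torch.tanh(self.mix(torch.cat([h, z], dim=-1)))

class OneStepDEQGenerator(nn.Module):
    """Solves for the fixed point of f given input noise z, then decodes it."""
    def __init__(self, dim, iters=20):
        super().__init__()
        self.f = TinyDEQBlock(dim)
        self.decode = nn.Linear(dim, dim)
        self.iters = iters

    def forward(self, z):
        h = torch.zeros_like(z)
        for _ in range(self.iters):  # naive solver; Anderson/Broyden are common in DEQs
            h = self.f(h, z)
        return self.decode(h)

# Offline distillation: (noise, image) pairs are generated once by the teacher
# diffusion sampler; the teacher is never queried during student training.
dim = 64
model = OneStepDEQGenerator(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

z = torch.randn(8, dim)   # stored initial noise from the teacher's sampling run
x = torch.randn(8, dim)   # stored teacher output (placeholder data here)
loss = nn.functional.mse_loss(model(z), x)  # regress student output onto teacher sample
loss.backward()
opt.step()
```

The key design point this illustrates is that the student sees only static noise/image pairs, so distillation reduces to a standard regression problem over a single forward pass of the DEQ.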
Submission Number: 50