BK-SDM: Architecturally Compressed Stable Diffusion for Efficient Text-to-Image Generation

Published: 20 Jun 2023, Last Modified: 16 Jul 2023, ES-FoMO 2023 Poster
Keywords: Stable Diffusion, Block Removal, Knowledge Distillation, Network Compression, Text-to-Image Generation
TL;DR: This study uncovers the potential of architectural compression for text-to-image diffusion models, with block removal and distillation pretraining.
Abstract: The exceptional text-to-image (T2I) generation results of Stable Diffusion models (SDMs) come with substantial computational demands. To address this issue, recent research on efficient SDMs has prioritized reducing the number of sampling steps and applying network quantization. Orthogonal to these directions, this study highlights the power of classical architectural compression for general-purpose T2I synthesis by introducing block-removed knowledge-distilled SDMs (BK-SDMs). We eliminate several residual and attention blocks from the U-Net of SDMs, obtaining over a 30% reduction in the number of parameters, MACs per sampling step, and latency. We conduct distillation-based pretraining with only 0.22M LAION pairs (fewer than 0.1% of the full training pairs) on a single A100 GPU. Despite being trained with limited resources, our compact models can imitate the original SDM by benefiting from transferred knowledge and achieve competitive results against larger multi-billion-parameter models on the zero-shot MS-COCO benchmark. Moreover, we show the applicability of our lightweight pretrained models in personalized generation with DreamBooth fine-tuning.
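The abstract combines a block-removed student U-Net with distillation-based pretraining from the original SDM. The snippet below is a minimal, self-contained sketch of how such a training objective could be assembled in PyTorch: an ordinary denoising (task) loss plus output-level and feature-level knowledge distillation from the teacher U-Net. The function name, the hooked module pairs, and the loss weights `w_out`/`w_feat` are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a distillation objective for a block-removed student U-Net.
# Assumes teacher_unet and student_unet are callables (nn.Module-like) that map
# (noisy_latents, timesteps, text_emb) to a predicted-noise tensor, and that the
# modules in feat_pairs produce shape-compatible outputs (e.g., at block
# boundaries the student retains). These are assumptions for illustration.
import torch
import torch.nn.functional as F


def distill_loss(student_unet, teacher_unet, noisy_latents, timesteps, text_emb,
                 target_noise, feat_pairs, w_out=1.0, w_feat=1.0):
    """Task loss + output-level KD + feature-level KD for one training step.

    feat_pairs: list of (teacher_module, student_module) whose forward outputs
    are matched with an MSE penalty.
    """
    feats = {"t": [], "s": []}
    hooks = []
    for t_mod, s_mod in feat_pairs:
        hooks.append(t_mod.register_forward_hook(lambda m, i, o: feats["t"].append(o)))
        hooks.append(s_mod.register_forward_hook(lambda m, i, o: feats["s"].append(o)))

    with torch.no_grad():                       # teacher is frozen
        teacher_out = teacher_unet(noisy_latents, timesteps, text_emb)
    student_out = student_unet(noisy_latents, timesteps, text_emb)

    for h in hooks:                             # avoid accumulating hooks across steps
        h.remove()

    loss_task = F.mse_loss(student_out, target_noise)   # standard denoising loss
    loss_out = F.mse_loss(student_out, teacher_out)     # output-level KD
    loss_feat = sum(F.mse_loss(s, t.detach())           # feature-level KD
                    for s, t in zip(feats["s"], feats["t"]))
    return loss_task + w_out * loss_out + w_feat * loss_feat
```

In practice the block-removed student would be initialized from the teacher's surviving weights before this objective is applied; the sketch only covers the loss computation, not block selection or initialization.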
Submission Number: 15