A Fully Quantized Training Accelerator for Diffusion Network With Tensor Type & Noise Strength Aware Precision Scheduling

Published: 01 Jan 2024, Last Modified: 16 May 2025 · IEEE Trans. Circuits Syst. II Express Briefs 2024 · CC BY-SA 4.0
Abstract: Fine-grained mixed-precision fully-quantized methods have great potential to accelerate neural network training, but existing methods exhibit large accuracy loss on more complex models such as diffusion networks. This brief introduces a fully-quantized training accelerator for diffusion networks. It features a novel training framework with tensor-type- and noise-strength-aware precision scheduling to optimize bit-width allocation. The processing cluster design supports dynamic switching of bit-width mappings for model weights, allows concurrent processing in four different bit-widths, and incorporates a gradient square sum collection unit to minimize on-chip memory access. Experimental results show up to 2.4$\times$ training speedup and an 81% reduction in operation bit-width overhead compared to existing designs, with minimal impact on image generation quality.
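
To illustrate the idea of tensor-type- and noise-strength-aware precision scheduling described in the abstract, the following is a minimal Python sketch. It is not the paper's actual schedule: the tensor categories, bit-width choices, and noise-strength thresholds below are hypothetical stand-ins, shown only to make the scheduling concept concrete (a bit-width is chosen per tensor based on its role and on how noisy the current diffusion step is).

```python
# Minimal illustrative sketch, NOT the paper's schedule.
# All bit-widths and thresholds below are hypothetical assumptions.

from enum import Enum


class TensorType(Enum):
    WEIGHT = "weight"
    ACTIVATION = "activation"
    GRADIENT = "gradient"


def schedule_bitwidth(tensor_type: TensorType, noise_strength: float) -> int:
    """Return a bit-width for one training tensor.

    noise_strength is assumed normalized to [0, 1], e.g. t / T for
    diffusion timestep t out of T total steps.
    """
    # Assumption: gradients need more precision than forward-pass tensors.
    base = {
        TensorType.WEIGHT: 4,
        TensorType.ACTIVATION: 4,
        TensorType.GRADIENT: 8,
    }[tensor_type]

    # Hypothetical rule: low-noise steps reconstruct fine image detail,
    # so they receive extra bits; high-noise steps tolerate coarser precision.
    if noise_strength < 0.25:
        return base + 8
    if noise_strength < 0.5:
        return base + 4
    return base


if __name__ == "__main__":
    # Example: gradient bit-width across three noise levels.
    for t in (0.1, 0.4, 0.9):
        print(f"noise={t:.1f} -> {schedule_bitwidth(TensorType.GRADIENT, t)} bits")
```

In a hardware realization such as the one described, a schedule of this kind would drive which of the supported bit-width mappings the processing cluster switches to for each tensor.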