Keywords: Generative Models, Efficient Machine Learning
Abstract: Diffusion models have revolutionized generative tasks, especially in the domain of text-to-image synthesis; however, their iterative denoising process demands substantial computational resources. In this paper, we present a novel acceleration strategy that integrates token-level pruning with caching techniques to tackle this computational challenge.
By measuring the relative magnitude of predicted noise, we identify tokens that change significantly across denoising iterations.
Additionally, we enhance token selection by incorporating spatial clustering and enforcing distributional balance. Our experiments demonstrate a 50\%-60\% reduction in computational cost while preserving model performance, thereby markedly improving the efficiency of diffusion models.
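The core idea of selecting tokens by the relative magnitude of their noise change can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name `select_active_tokens`, the `keep_ratio` parameter, and the use of a top-k rule are all assumptions introduced for clarity.

```python
import numpy as np

def select_active_tokens(noise_prev, noise_curr, keep_ratio=0.4):
    """Pick tokens whose predicted noise changed most between two
    denoising steps; the remaining tokens can reuse cached features.

    noise_prev, noise_curr: (num_tokens, dim) noise predictions.
    Returns (indices to recompute, indices to serve from cache).
    """
    # Relative magnitude of the change in each token's noise prediction.
    delta = np.linalg.norm(noise_curr - noise_prev, axis=1)
    rel = delta / (np.linalg.norm(noise_prev, axis=1) + 1e-8)
    k = max(1, int(keep_ratio * len(rel)))
    order = np.argsort(-rel)        # largest relative change first
    return order[:k], order[k:]     # (recompute, reuse-from-cache)

# Toy usage: one token changes sharply and should be flagged active.
rng = np.random.default_rng(0)
prev = rng.normal(size=(16, 8))
curr = prev + rng.normal(scale=0.1, size=(16, 8))
curr[3] += 5.0                      # token 3 changes sharply
active, cached = select_active_tokens(prev, curr, keep_ratio=0.25)
```

The paper additionally refines this selection with spatial clustering and distributional balance, which the sketch above omits.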
Submission Number: 99