Accelerating Diffusion Transformer via Error-Optimized Cache

Published: 01 Jan 2025 · Last Modified: 22 Jul 2025 · CoRR 2025 · CC BY-SA 4.0
Abstract: The Diffusion Transformer (DiT) is a key architecture for content generation, but its sampling process is time-consuming. Many studies reduce sampling time with caching: DiT features computed at one time step are reused at the next, so the corresponding calculations can be skipped. Existing caching methods, however, focus on locating and caching low-error modules rather than on reducing the error that caching itself introduces, so the quality of the generated content degrades sharply as caching intensity increases. To address this problem, we propose the Error-Optimized Cache (EOC), which introduces three key components: (1) prior knowledge extraction: extract and process the caching differences; (2) a judgment method for cache optimization: decide whether a given caching step needs to be optimized; (3) cache optimization: reduce the caching error. Experiments show that EOC significantly reduces the error accumulation caused by caching, especially under aggressive caching. On the ImageNet dataset, without substantially increasing the computational load, EOC improves the FID of the generated images for the rule-based method FORA at caching ratios of 75%, 50%, and 25%, and for the training-based method Learning-to-Cache at a caching ratio of 22%. Specifically, the FID improves from 30.454 to 21.690 (28.8%), from 6.857 to 5.821 (15.1%), from 3.870 to 3.692 (4.6%), and from 3.539 to 3.451 (2.5%), respectively.
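
The sketch below illustrates the general caching-with-correction idea the abstract describes, not the authors' actual EOC implementation: a block's output is reused on cache-hit steps, and a running estimate of the caching error (a stand-in for the paper's "prior knowledge extraction" and "cache optimization" steps) is added as a correction instead of returning the stale value unchanged. All names here (ToyBlock, ErrorOptimizedCache, correction_scale, the alternating recompute schedule) are hypothetical and chosen only for illustration.

```python
# Minimal sketch of feature caching with an error correction, assuming a
# PyTorch-style DiT block. This is NOT the paper's EOC algorithm; the real
# method extracts prior knowledge offline and judges per step whether to
# optimize the cache.
import torch
import torch.nn as nn


class ToyBlock(nn.Module):
    """Stand-in for one DiT transformer block."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.ff = nn.Sequential(
            nn.LayerNorm(dim), nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.ff(x)


class ErrorOptimizedCache:
    """Reuse a block's output across time steps and apply a small correction."""
    def __init__(self, correction_scale: float = 0.5):
        self.cached = None          # last computed block output
        self.error_estimate = None  # running estimate of the caching error
        self.correction_scale = correction_scale  # hypothetical hyperparameter

    def step(self, block: nn.Module, x: torch.Tensor, compute: bool) -> torch.Tensor:
        if compute or self.cached is None:
            out = block(x)
            if self.cached is not None:
                # "Prior knowledge": how far the stale cache drifted from a fresh result.
                self.error_estimate = out - self.cached
            self.cached = out
            return out
        # Cache hit: skip the block; correct the stale value if an estimate exists.
        if self.error_estimate is not None:
            return self.cached + self.correction_scale * self.error_estimate
        return self.cached


if __name__ == "__main__":
    torch.manual_seed(0)
    block, cache = ToyBlock(), ErrorOptimizedCache()
    x = torch.randn(4, 16, 64)
    for t in range(10):              # toy sampling loop over time steps
        recompute = (t % 2 == 0)     # hypothetical 50% caching schedule
        x = cache.step(block, x, compute=recompute)
    print(x.shape)                   # torch.Size([4, 16, 64])
```

In this toy setup the correction is estimated online from the most recent recomputation; the paper instead extracts the caching differences beforehand and decides per step whether a correction is worthwhile, which is what keeps the extra computational load small.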