Keywords: diffusion models, regularization
TL;DR: We prove mathematically that sparsity can reduce the influence of the input dimension on the computational complexity of diffusion models to that of the data’s much smaller intrinsic dimension.
Abstract: Diffusion models are one of the key architectures of generative AI. Their main
drawback, however, is their computational cost. This study indicates that sparsity, a
concept well known especially in statistics, can provide a pathway to more efficient
diffusion pipelines. We prove mathematically that sparsity can reduce the influence of
the input dimension on the computational complexity to that of a much smaller intrinsic
dimension of the data. Our empirical findings confirm that inducing sparsity can indeed
lead to better samples at a lower cost.
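The abstract does not say how sparsity is induced in practice. Below is a minimal, hypothetical sketch of one common way to do it: an L1 weight penalty on a toy denoising network. The network shape, the `lam` hyperparameter, and the choice of L1 penalty are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch: sparsity-inducing (L1) regularization in a toy
# denoiser training loop. Not the paper's method; all names are illustrative.
import torch
import torch.nn as nn

dim, hidden = 256, 512                       # ambient input dimension (assumed)
denoiser = nn.Sequential(nn.Linear(dim + 1, hidden), nn.ReLU(),
                         nn.Linear(hidden, dim))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
lam = 1e-4                                   # sparsity strength (assumed)

for step in range(1000):
    x0 = torch.randn(64, dim)                # stand-in for real training data
    t = torch.rand(64, 1)                    # diffusion time in [0, 1]
    noise = torch.randn_like(x0)
    # Simple variance-preserving forward noising of the clean sample.
    xt = torch.sqrt(1 - t) * x0 + torch.sqrt(t) * noise
    pred = denoiser(torch.cat([xt, t], dim=1))
    mse = ((pred - noise) ** 2).mean()       # standard denoising objective
    l1 = sum(p.abs().sum() for p in denoiser.parameters())
    (mse + lam * l1).backward()              # L1 term pushes weights toward zero
    opt.step()
    opt.zero_grad()
```

In such a setup, weights driven to zero can be pruned, so the effective cost per forward pass tracks the number of nonzero parameters rather than the full network, loosely mirroring the intrinsic-dimension argument in the abstract.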
Supplementary Material: zip
Primary Area: learning theory
Submission Number: 24280