From Points to Functions: Infinite-dimensional Representations in Diffusion Models

Published: 29 Mar 2022, Last Modified: 05 May 2023, ICLR 2022 DGM4HSD Workshop Poster
Keywords: diffusion-based models, representation learning, score model, trajectory representation, attention
TL;DR: We leverage trajectory-based representations obtained from diffusion-based representation learning and analyze the kind of information encoded in different parts of the trajectory.
Abstract: Diffusion-based generative models learn to iteratively transform unstructured noise into a complex target distribution, as opposed to Generative Adversarial Networks (GANs) or the decoder of Variational Autoencoders (VAEs), which produce samples from the target distribution in a single step. Thus, in diffusion models every sample is naturally connected to a random trajectory, which is a solution to a learned stochastic differential equation (SDE). Generative models are only concerned with the final state of this trajectory, which delivers samples from the desired distribution. Abstreiter et al. (2021) showed that these stochastic trajectories can be seen as continuous filters that wash out information along the way. Consequently, there is an intermediate time step at which the preserved information is optimal for a given downstream task. In this work, we show that combining the information content from different time steps gives a strictly better representation for the downstream task. We introduce attention- and recurrence-based modules that "learn to mix" the information content of various time steps, such that the resulting representation leads to superior performance on downstream tasks.
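The paper does not publish its implementation here, but the described idea can be sketched concretely. Below is a minimal, hypothetical PyTorch illustration of the two kinds of "learn to mix" modules the abstract names: given per-time-step encodings of a trajectory (a tensor of shape batch × time steps × feature dim), an attention-based mixer lets time steps attend to each other before pooling, while a recurrence-based mixer runs a GRU along the time axis. The module names, layer choices, and dimensions are all assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn


class AttentionMixer(nn.Module):
    """Hypothetical sketch: self-attention over per-time-step trajectory encodings."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, num_steps, dim) -- encodings of x_t at several time steps t
        h, _ = self.attn(z, z, z)   # time steps attend to each other
        h = self.norm(h + z)        # residual connection + layer norm
        return h.mean(dim=1)        # pool into a single representation


class RecurrentMixer(nn.Module):
    """Hypothetical sketch: a GRU consumes the trajectory encodings in time order."""

    def __init__(self, dim: int):
        super().__init__()
        self.gru = nn.GRU(dim, dim, batch_first=True)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        _, h_n = self.gru(z)        # h_n: (1, batch, dim), final hidden state
        return h_n.squeeze(0)


# Usage: mix encodings taken at, e.g., 8 time steps along each trajectory.
z = torch.randn(16, 8, 128)         # batch of 16 trajectories, feature dim 128
rep_attn = AttentionMixer(128)(z)   # (16, 128) representation for a downstream task
rep_rnn = RecurrentMixer(128)(z)    # (16, 128) recurrent alternative
```

Either pooled output would then feed a downstream head (e.g. a linear classifier); the contrast with the single-time-step baseline is that the representation is assembled from the whole trajectory rather than one intermediate state.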