Simple and Critical Iterative Denoising: A Recasting of Discrete Diffusion in Graph Generation

Published: 01 May 2025, Last Modified: 18 Jun 2025, ICML 2025 poster, CC BY-NC-SA 4.0
TL;DR: We propose a novel framework called Simple Iterative Denoising, which simplifies discrete diffusion and circumvents compounding denoising errors by removing dependencies on previous intermediate states in the noising process.
Abstract: Discrete Diffusion and Flow Matching models have significantly advanced generative modeling for discrete structures, including graphs. However, in these models the noisy distributions at successive time steps are dependent, which leads to error accumulation and propagation during the reverse denoising process—a phenomenon known as \emph{compounding denoising errors}. To address this problem, we propose a novel framework called \emph{Simple Iterative Denoising}, which simplifies discrete diffusion and circumvents the issue by removing dependencies on previous intermediate states in the noising process. Additionally, we enhance our model by incorporating a \emph{Critic}, which during generation selectively retains or corrupts elements in an instance based on their likelihood under the data distribution. Our empirical evaluations demonstrate that the proposed method significantly outperforms existing discrete diffusion baselines in graph generation tasks.
Lay Summary: This paper looks at denoising models for structured data like text or molecules—where the information is made up of distinct elements, such as words or atoms. These models work by intentionally corrupting the data—for example, hiding words in a sentence or altering parts of a molecule—and then training a model to reconstruct the original version. Once trained, the model can generate text from a fully masked sentence or suggest molecular structures starting from random atoms. However, at the beginning of this reconstruction process, the data is highly corrupted, so the model’s predictions are often weak. Previous models relied heavily on these early guesses, which led to error propagation. This paper introduces a simple approach to remove that dependency, resulting in significantly better performance.
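To make the core idea concrete, here is a minimal sketch of the kind of iterative denoising loop the abstract describes: at every step the model predicts a clean sample, which is then re-corrupted directly at a lower noise level, so no step depends on a previous noisy intermediate state. This is an illustrative toy using masking noise, not the authors' implementation; the `MASK` token, the linear noise schedule, and the oracle denoiser are all assumptions for demonstration.

```python
import numpy as np

MASK = -1  # hypothetical mask token id (assumption, for illustration)

def noise(x0, t, rng):
    # Corrupt the clean estimate x0 directly: mask each element
    # independently with probability t. Crucially, this depends only
    # on x0, never on any earlier noisy state.
    mask = rng.random(x0.shape) < t
    return np.where(mask, MASK, x0)

def iterative_denoise(denoiser, length, steps, rng):
    # Start fully masked; at each step predict a clean sample, then
    # re-noise that prediction at a fresh, lower noise level.
    x = np.full(length, MASK)
    for t in np.linspace(1.0, 0.0, steps + 1)[1:]:
        x0_hat = denoiser(x)       # model's clean-data prediction
        x = noise(x0_hat, t, rng)  # independent corruption of the prediction
    return x
```

Because each corruption is drawn from the clean prediction rather than from the previous noisy state, an early mistake is not carried forward mechanically; the model gets a fresh chance to fix it at every step, which is the intuition behind avoiding compounding denoising errors.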
Link To Code: https://github.com/yoboget/sid
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: Graph, molecules, generation, discrete, diffusion, denoising, flow, matching, generative, compounding, errors, iterative, critic, critical
Submission Number: 7245