Principled Latent Diffusion for Graphs via Laplacian Autoencoders

ICLR 2026 Conference Submission 8966 Authors

17 Sept 2025 (modified: 08 Oct 2025), CC BY 4.0
Keywords: Graph generation, Latent diffusion
TL;DR: We introduce a latent graph diffusion framework that enables near-lossless reconstruction, removes the quadratic bottleneck of standard graph diffusion, and achieves up to $1000\times$ faster generation while maintaining competitive performance.
Abstract: Graph diffusion models achieve state-of-the-art performance in graph generation but suffer from quadratic complexity in the number of nodes, and much of their capacity is wasted modeling the absence of edges in sparse graphs. Inspired by latent diffusion in other modalities, a natural idea is to compress graphs into a low-dimensional latent space and perform diffusion there. However, unlike images or text, graph generation demands near-lossless reconstruction, since even a single error in decoding an adjacency matrix can invalidate the entire sample. This challenge has remained largely unaddressed. We propose a latent graph diffusion framework that directly overcomes these obstacles. A permutation-equivariant autoencoder maps each node into a fixed-dimensional embedding from which the full adjacency matrix is provably recoverable, enabling near-lossless reconstruction for both undirected graphs and DAGs. The latent representation scales linearly with the number of nodes, eliminating the quadratic bottleneck and making it feasible to train larger and more expressive models. In this latent space, we train a Diffusion Transformer with flow matching, enabling efficient and expressive graph generation. Our approach achieves results competitive with prior graph diffusion models while delivering speed-ups ranging from $50\times$ to $1000\times$.
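To make the core idea concrete, the sketch below shows how a binary adjacency matrix can be decoded from fixed-dimensional per-node embeddings via thresholded inner products. This is a generic inner-product decoder used for illustration only; the submission's Laplacian-based autoencoder and its recoverability guarantee may differ in detail.

```python
import numpy as np

def decode_adjacency(z, threshold=0.0):
    """Recover a binary adjacency matrix from node embeddings.

    `z` is an (n, d) array with one latent vector per node; an edge is
    predicted wherever the pairwise inner product exceeds `threshold`.
    Hypothetical illustration: a standard inner-product decoder, not
    necessarily the paper's exact construction.
    """
    scores = z @ z.T                    # (n, n) pairwise similarities
    adj = (scores > threshold).astype(int)
    np.fill_diagonal(adj, 0)            # no self-loops
    return adj

# Tiny example: embeddings chosen so that only nodes 0 and 1 connect.
z = np.array([[1.0, 0.0],
              [1.0, 0.1],
              [-1.0, 0.5]])
A = decode_adjacency(z)
```

Note that the latent representation here is `n x d` for fixed `d`, so it grows linearly in the number of nodes even though the decoded adjacency is quadratic, which is the property the abstract exploits to avoid the quadratic diffusion bottleneck.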
Supplementary Material: zip
Primary Area: generative models
Submission Number: 8966