Geometric Generative Modeling with Noise-Conditioned Graph Networks

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: Noise-Conditioned Graph Networks is a new class of graph neural networks for more efficient and expressive generative modeling.
Abstract: Generative modeling of graphs with spatial structure is essential across many applications, from computer graphics to spatial genomics. Recent flow-based generative models have achieved impressive results by gradually adding and then learning to remove noise from these graphs. Existing models, however, use graph neural network architectures that are independent of the noise level, limiting their expressiveness. To address this issue, we introduce *Noise-Conditioned Graph Networks* (NCGNs), a class of graph neural networks that dynamically modify their architecture according to the noise level during generation. Our theoretical and empirical analysis reveals that as noise increases, (1) graphs require information from increasingly distant neighbors and (2) graphs can be effectively represented at lower resolutions. Based on these insights, we develop Dynamic Message Passing (DMP), a specific instantiation of NCGNs that adapts both the range and resolution of message passing to the noise level. DMP consistently outperforms noise-independent architectures on a variety of domains including $3$D point clouds, spatiotemporal transcriptomics, and images.
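The core idea of conditioning message-passing range on the noise level can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation (see the linked repository for that): all names (`dynamic_message_passing`, `base_radius`, `max_radius`) are invented, and the aggregation is a simple radius-graph mean rather than a learned layer. The point is that the connectivity radius grows with the noise level, so noisier inputs aggregate information from more distant nodes.

```python
import numpy as np

def dynamic_message_passing(pos, feats, noise_level, base_radius=0.1, max_radius=1.0):
    """Hypothetical sketch of noise-conditioned message passing:
    the neighborhood radius is interpolated with the noise level
    (in [0, 1]), so high noise yields long-range aggregation."""
    radius = base_radius + noise_level * (max_radius - base_radius)
    # Pairwise Euclidean distances between node positions.
    dists = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    adj = dists <= radius  # boolean adjacency; self-loops included
    # Mean-aggregate neighbor features (degree >= 1 due to self-loops).
    deg = adj.sum(axis=1, keepdims=True)
    return (adj @ feats) / deg

rng = np.random.default_rng(0)
pos = rng.random((8, 3))    # 8 nodes in 3D space
feats = rng.random((8, 4))  # 4-dimensional node features
out_clean = dynamic_message_passing(pos, feats, noise_level=0.0)  # local
out_noisy = dynamic_message_passing(pos, feats, noise_level=1.0)  # long-range
```

A full DMP layer would additionally coarsen the graph (lower resolution) at high noise and use learned message functions; this sketch only captures the range-adaptation insight.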
Lay Summary: Modern AI generates realistic geometric objects through a process that starts with pure noise (similar to static on a TV screen) and gradually learns to reverse this noise back into recognizable objects such as cars, furniture, or human organs. However, current AI systems use the same rigid neural network throughout this entire noise-removal process, like using the same tool to both create the scaffolding of a building and paint fine details. This one-size-fits-all approach limits the quality of generated objects because different stages of noise removal require different processing strategies. We developed *Noise-Conditioned Graph Networks* that work like a smart contractor who uses different tools for different stages of construction—heavy machinery for rough work and fine brushes for detailed finishing. Our AI dynamically changes how it processes information during the noise-removal process: at early, very noisy stages, it looks broadly at distant relationships and works with simplified representations, while at later, nearly clean stages, it focuses locally on fine details with full resolution. This adaptive approach is backed by mathematical theory showing that optimal information gathering requires different strategies at different noise levels. Our method improves 3D shape generation, biological data modeling, and image generation. The technique requires minimal code changes to implement, making it immediately practical for existing AI systems. This work could accelerate advances in computer graphics, medical imaging, drug discovery, and any field requiring geometric modeling, bringing us closer to AI that truly understands and creates the geometric world around us.
Link To Code: https://github.com/peterpaohuang/ncgn
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: graph neural networks, generative models, diffusion models, flow-matching
Submission Number: 4162