Discriminator-Guided Diffusion for Generating Large Directed and Undirected Graphs

Published: 10 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · License: CC BY 4.0
Keywords: Graph generative models, Diffusion generative models, Graph neural networks
Abstract: Synthesizing large-scale, realistic (directed) graphs is essential for modeling complex relationships, detecting anomalies, and simulating scenarios where real-world data is sparse, sensitive, or unavailable. While diffusion-based graph generators have shown promising results on small-scale graphs such as molecular structures, existing models face three key challenges: (1) quadratic time complexity, which makes them impractical for large graphs; (2) a narrow focus on either structure or node and edge features, but not both; and (3) limited exploration of directed graph generation. In this work, we propose **DGDGL**: *Discriminator-Guided Diffusion for Generating Large Directed and Undirected Graphs*. Our approach unifies structure and feature generation for both nodes and edges within a single framework and supports both directed and undirected graphs. DGDGL uses graph neural networks and a novel discriminator module to guide the denoising process through gradient-based feedback, improving the quality of generated graphs while keeping time complexity linear in the number of edges, which makes the method scalable to large graphs. We evaluate DGDGL on diverse datasets, including undirected citation networks and directed financial graphs; the results show that our method outperforms existing quadratic-time models. By combining support for directed and undirected graphs, feature generation for nodes and edges, and efficient scalability, DGDGL shows potential for broad use in complex graph-based systems.
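To make the guidance idea concrete, here is a minimal toy sketch of discriminator-guided sampling in general: at each denoising step, the sample is nudged along the gradient of the discriminator's log-score. Everything here is hypothetical and illustrative only; `denoise_step`, `disc_log_grad`, the quadratic discriminator, and the guidance scale are stand-ins, not the paper's actual DGDGL components (which operate on graph structure and features via GNNs).

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "real pattern" that our hypothetical discriminator prefers.
target = np.array([1.0, -1.0])

def disc_log_grad(x):
    # Gradient of log D(x) for the toy discriminator D(x) = exp(-||x - target||^2):
    # grad log D(x) = -2 (x - target), which pushes x toward `target`.
    return -2.0 * (x - target)

def denoise_step(x, t, num_steps):
    # Placeholder denoiser: shrink toward the origin and add noise
    # whose scale decays as the reverse process approaches t = 0.
    noise = rng.normal(scale=0.1 * t / num_steps, size=x.shape)
    return 0.9 * x + noise

def guided_sample(num_steps=50, scale=0.05):
    x = rng.normal(size=2)  # start from pure noise
    for t in range(num_steps, 0, -1):
        x = denoise_step(x, t, num_steps)
        # Gradient-based feedback from the discriminator steers the sample
        # toward regions the discriminator scores as realistic.
        x = x + scale * disc_log_grad(x)
    return x

sample = guided_sample()
```

In this sketch the guided sample ends up with the same sign pattern as `target`, illustrating how discriminator gradients bias the reverse diffusion trajectory; in DGDGL the analogous feedback is applied to graph structure and node/edge features rather than a 2-D vector.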
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 3690