Exploring the Design Space of Generative Diffusion Processes for Sparse Graphs

Published: 29 Nov 2022, Last Modified: 05 May 2023 | SBM 2022 Poster | Readers: Everyone
Keywords: sparse graphs, jump diffusion, score function, denoising score matching
TL;DR: We consider how to efficiently generate large sparse graphs using score-based methods generalized to jump diffusion processes.
Abstract: We extend score-based generative diffusion processes (GDPs) to sparse graphs and other inherently discrete data, with a focus on scalability. GDPs apply diffusion to training samples, then learn a reverse process that generates new samples from noise. Previous work applying GDPs to discrete data effectively relaxes discrete variables to continuous ones. Our approach is different: we consider jump diffusion (i.e., diffusion with punctual discontinuities) in $\mathbb{R}^d \times \mathcal{G}$, where $\mathcal{G}$ models the discrete components of the data. We focus our attention on sparse graphs: our \textsc{Dissolve} process gradually breaks a graph $(V,E) \in \mathcal{G}$ apart through a number of distinct jump events. This confers significant advantages over GDPs that use less efficient representations and/or destroy the graph information suddenly. Gaussian kernels allow for efficient training with denoising score matching; standard GDP methods can be adapted with just an extra argument to the score function. We consider improvement opportunities for \textsc{Dissolve} and discuss the conditions necessary to generalize to other kinds of inherently discrete data.
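
The abstract's claim that standard GDP machinery carries over "with just an extra argument to the score function" can be made concrete. Below is a minimal sketch in PyTorch of denoising score matching with a Gaussian perturbation kernel, where the score network receives the current discrete graph state as an extra input. The names (`ScoreNet`, `dsm_loss`), the dense-adjacency encoding, and the network architecture are illustrative assumptions, not the authors' implementation.

```python
import torch

class ScoreNet(torch.nn.Module):
    """Hypothetical score model: a standard DSM network with one extra
    argument carrying the discrete graph state (here, adjacency rows)."""

    def __init__(self, d: int, n: int):
        super().__init__()
        # Input per node: d features + n adjacency entries + 1 time scalar.
        self.net = torch.nn.Sequential(
            torch.nn.Linear(d + n + 1, 128), torch.nn.SiLU(),
            torch.nn.Linear(128, d),
        )

    def forward(self, x, adj, t):
        # Concatenate continuous node features, the node's adjacency row
        # (the discrete component), and the diffusion time.
        h = torch.cat([x, adj, t.expand(x.size(0), 1)], dim=-1)
        return self.net(h)

def dsm_loss(score_net, x0, adj_t, t, sigma):
    """Denoising score matching under a Gaussian kernel.

    x0:    clean node features in R^d, shape (n, d)
    adj_t: graph state after the jump process at time t (illustrative)
    sigma: noise level of the Gaussian perturbation kernel at time t
    """
    noise = torch.randn_like(x0)
    x_t = x0 + sigma * noise
    # Score of q(x_t | x0) = N(x0, sigma^2 I) is -(x_t - x0)/sigma^2.
    target = -noise / sigma
    pred = score_net(x_t, adj_t, t)
    return ((pred - target) ** 2).sum(dim=-1).mean()

# Usage sketch: a partially dissolved graph stands in for the jump state.
n, d = 8, 4
net = ScoreNet(d, n)
x0 = torch.randn(n, d)
adj_t = (torch.rand(n, n) < 0.3).float()
loss = dsm_loss(net, x0, adj_t, torch.tensor(0.5), sigma=0.5)
loss.backward()
```

The design choice this illustrates is the one the abstract highlights: the continuous part of the process is trained exactly as in an ordinary GDP, and the discrete state produced by the jump process enters only as conditioning to the score function.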
Student Paper: No