Laplacian-Guided Denoising Graph Diffusion for Graph Learning with an Adaptive Prior

Published: 23 Sept 2025, Last Modified: 21 Oct 2025. NPGML Poster. License: CC BY 4.0
Keywords: Graph representation learning; Network graph; Diffusion models
Abstract: Graph representation learning often relies on manually engineered, task-specific inductive biases, which limit model flexibility and generalization across diverse tasks. While diffusion models have shown promise in capturing arbitrary distributions, they frequently lack deep integration of graph structure. To address this, we propose LapDiff, a novel diffusion-based framework that learns adaptive priors to dynamically align its inductive bias with the intrinsic characteristics of graph-structured data and their tasks. The novelty of LapDiff lies in its use of Laplacian smoothing as a structure-aware noise mechanism in the forward process, complemented by topological perturbations. This design enables the denoising network to effectively capture the underlying data-generating factors tied to a graph's unique structure and features. By learning priors from the task and data, LapDiff mitigates the limitations of static biases and enhances task-agnostic generalization. Extensive experiments on large-scale OGB benchmarks demonstrate that LapDiff is effective for both link prediction and node classification, achieving state-of-the-art performance and offering a new perspective on graph representation learning.
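To make the forward process concrete, below is a minimal sketch of one Laplacian-smoothing "noising" step of the form X ← (I − αL_sym)X, where L_sym is the symmetric normalized graph Laplacian. The function name, the single-step form, and the fixed α are illustrative assumptions; the paper's actual forward process (including its noise schedule and topological perturbations) is not specified here.

```python
import numpy as np

def laplacian_smoothing_step(X, A, alpha=0.5):
    """One illustrative structure-aware smoothing step: X <- (I - alpha * L_sym) X.

    X: (n, d) node feature matrix; A: (n, n) adjacency matrix.
    The step diffuses each node's features toward its neighbors',
    acting as structure-aware 'noise' instead of isotropic Gaussian noise.
    This is a sketch, not the paper's exact formulation.
    """
    deg = A.sum(axis=1)
    # D^{-1/2}, with zero-degree nodes mapped to 0 (inf ** -0.5 == 0.0)
    with np.errstate(divide="ignore"):
        d_inv_sqrt = np.where(deg > 0, deg, np.inf) ** -0.5
    # Symmetric normalized Laplacian: L_sym = I - D^{-1/2} A D^{-1/2}
    L_sym = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return X - alpha * (L_sym @ X)

# Example: on a two-node graph, one step with alpha=0.5 pulls both
# features to their common mean.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
X = np.array([[0.0], [1.0]])
print(laplacian_smoothing_step(X, A))  # [[0.5], [0.5]]
```

Iterating this step oversmooths features toward a graph-dependent equilibrium, which is what makes the corresponding reverse (denoising) network structure-aware: it must undo smoothing along the graph's own topology rather than invert generic Gaussian noise.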
Submission Number: 126