Learn Beneficial Noise as Graph Augmentation

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Although graph contrastive learning (GCL) has been widely investigated, generating effective and stable graph augmentations remains a challenge. Existing methods often apply heuristic augmentations such as random edge dropping, which may disrupt important graph structures and result in unstable GCL performance. In this paper, we propose **P**ositive-**i**ncentive **N**oise driven **G**raph **D**ata **A**ugmentation (PiNGDA), where positive-incentive noise (pi-noise) provides a principled, information-theoretic analysis of the beneficial effect of noise. To bridge standard GCL and the pi-noise framework, we design a Gaussian auxiliary variable that converts the loss function into an information entropy. We prove that standard GCL with pre-defined augmentations is equivalent to estimating the beneficial noise via point estimation. Following this analysis, PiNGDA learns the beneficial noise on both topology and attributes through a trainable noise generator for graph augmentations, rather than relying on such a point estimate. Since the generator learns how to produce beneficial perturbations on graph topology and node attributes, PiNGDA is more reliable than existing methods. Extensive experimental results validate the effectiveness and stability of PiNGDA.
Lay Summary: Graphs are used to model complex systems such as social networks or molecules. In graph contrastive learning, we train models to distinguish between different views of the same graph by applying data augmentation. However, existing methods often use random changes, which can accidentally remove important information and harm performance. We propose PiNGDA, a new approach that replaces random edits with positive-incentive noise: learned, helpful changes based on principles from information theory. PiNGDA uses a trainable generator to add meaningful perturbations to both the graph's structure and node features. This guided augmentation helps the model learn more general and robust representations. Experiments show that PiNGDA leads to more stable and effective learning than traditional GCL methods.
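To make the idea of a trainable noise generator concrete, the sketch below shows one minimal way such a component could look. This is an illustrative assumption, not the paper's actual architecture: the class `NoiseGenerator`, its weight matrices, and the specific heads (a per-feature Gaussian scale for node attributes, a per-edge keep probability for topology) are all hypothetical choices; in the paper these parameters would be trained jointly with the GCL objective rather than fixed at random.

```python
import numpy as np

rng = np.random.default_rng(0)

class NoiseGenerator:
    """Hypothetical sketch of a learnable noise generator for graph
    augmentation: one head predicts per-feature Gaussian noise scales
    for node attributes, another predicts per-edge keep logits.
    (Names and architecture are illustrative, not from the paper.)"""

    def __init__(self, feat_dim):
        # Randomly initialized weights stand in for trained parameters.
        self.W_attr = rng.normal(0.0, 0.1, (feat_dim, feat_dim))
        self.W_edge = rng.normal(0.0, 0.1, (2 * feat_dim,))

    def augment(self, X, edges):
        # Attribute noise: reparameterized Gaussian whose scale is
        # predicted from the node features themselves.
        log_sigma = X @ self.W_attr
        sigma = np.exp(np.clip(log_sigma, -5.0, 2.0))
        X_aug = X + sigma * rng.standard_normal(X.shape)

        # Topology noise: keep each edge with a probability computed
        # from the concatenated features of its two endpoints.
        pair = np.concatenate([X[edges[:, 0]], X[edges[:, 1]]], axis=1)
        keep_p = 1.0 / (1.0 + np.exp(-(pair @ self.W_edge)))
        edges_aug = edges[rng.random(len(edges)) < keep_p]
        return X_aug, edges_aug

# Toy graph: 5 nodes with 4-dim features, a 4-edge path.
X = rng.standard_normal((5, 4))
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])

gen = NoiseGenerator(feat_dim=4)
X_aug, edges_aug = gen.augment(X, edges)
```

In a full GCL pipeline, two such augmented views would be encoded by a GNN and pulled together by a contrastive loss, with gradients flowing back into the generator so that the perturbations it produces become beneficial rather than random.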
Primary Area: Deep Learning->Graph Neural Networks
Keywords: Beneficial Noise; Graph Augmentation
Submission Number: 4235