Balancing Graph Embedding Smoothness in Self-supervised Learning via Information-Theoretic Decomposition

Published: 29 Jan 2025 · Last Modified: 29 Jan 2025 · WWW 2025 Poster · License: CC BY 4.0
Track: Graph algorithms and modeling for the Web
Keywords: Self-supervised Learning, Graph Neural Network, Oversmoothing
Abstract: In the graph domain, self-supervised learning (SSL) has garnered significant attention, particularly through Graph Neural Networks (GNNs) paired with pretext tasks originally designed for other domains, such as contrastive learning and feature reconstruction. However, it remains unclear whether these methods capture essential graph properties, such as the similarity of a node's representation to those of its neighbors. We observe that existing methods occupy opposite ends of a spectrum defined by graph embedding smoothness, with each extreme excelling on specific downstream tasks. This observation suggests that balancing the two extremes can improve performance across a wider range of downstream tasks. To find this balance with respect to graph embedding smoothness, we decompose the SSL objective into three terms, derived by incorporating the neighbor representation variable through the lens of information theory. Our framework, BSG (Balancing Smoothness in Graph SSL), introduces novel loss functions that improve representation quality in graph-based SSL by optimizing the three derived terms: a neighbor loss, a minimal loss, and a divergence loss. We provide a rigorous theoretical analysis of these loss functions, highlighting their significance from both the SSL and graph smoothness perspectives. Extensive experiments on multiple real-world datasets for node classification and link prediction consistently show that BSG achieves state-of-the-art performance, outperforming existing methods. Our implementation code is available at https://anonymous.4open.science/r/BSG-2025/.
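For intuition, here is a minimal sketch of one way an SSL objective can split into three information-theoretic terms via the chain rule of mutual information; the variables below and their mapping onto BSG's neighbor, minimal, and divergence losses are our assumptions, not details confirmed by the abstract. Let $X$ denote the input features, $Z$ the learned node representation, and $N$ the neighbor representation variable. Expanding $I(Z; X, N)$ in two orders, $I(Z; X, N) = I(Z; N) + I(Z; X \mid N) = I(Z; X) + I(Z; N \mid X)$, gives
\[
I(Z; X) \;=\; \underbrace{I(Z; N)}_{\text{neighbor term}} \;+\; \underbrace{I(Z; X \mid N)}_{\text{minimal term}} \;-\; \underbrace{I(Z; N \mid X)}_{\text{divergence term}}.
\]
Under this (hypothetical) reading, maximizing the neighbor term pulls a node's embedding toward its neighbors (more smoothness), penalizing the minimal term discards input information not shared with neighbors, and the divergence term controls how far the embedding departs from what the input alone determines, which is the kind of trade-off the abstract describes.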
Submission Number: 1462