Gaussian Mutual Information Maximization for Graph Self-supervised Learning: Bridging Contrastive-based to Decorrelation-based

21 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Graph neural networks, graph self-supervised learning, Gaussian mutual information maximization, unified theoretical framework
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We propose a graph contrastive learning objective based on Gaussian mutual information maximization and explore the relationship between decorrelation-based and contrastive-based methods.
Abstract: Enlightened by the \textit{InfoMax} principle, graph contrastive learning has achieved remarkable performance in processing large amounts of unlabeled graph data. Because mutual information (MI) cannot be computed exactly in practice, conventional contrastive methods approximate its lower bound with parametric neural estimators, which inevitably introduces additional parameters and increases computational complexity. Building upon a common Gaussian assumption on the distribution of node representations, we rigorously derive a computationally tractable surrogate for the original MI, termed Gaussian Mutual Information (GMI). GMI eliminates the reliance on parameterized estimators and negative samples, yielding an efficient contrastive objective with provable performance guarantees. A parallel research branch on decorrelation-based self-supervised methods has also emerged, whose core idea is to mitigate dimensional collapse by decoupling different representation channels. While the differences between the contrastive-based and decorrelation-based families have been extensively discussed to inspire new approaches, their potential connections remain largely unexplored. Taking the proposed GMI-based objective with a cross-view identity constraint as a pivot, we bridge the gap between these two research areas from two aspects, approximate form and consistent solution, which contributes to a unified theoretical framework for graph self-supervised learning. Extensive comparison experiments and visual analyses provide compelling evidence for the effectiveness and efficiency of our method while supporting our theoretical results. Moreover, the empirical evidence indicates that our approach maintains its performance even when the Gaussian assumption is violated, which significantly broadens its application scenarios.
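Note: the abstract claims a closed-form, estimator-free MI surrogate under a Gaussian assumption. As context only, the sketch below shows the standard closed form of mutual information between two jointly Gaussian random vectors, estimated from sampled node embeddings of two augmented views; it is not the authors' released code, and the function name `gaussian_mi`, the NumPy implementation, and the regularizer `eps` are illustrative assumptions rather than the paper's exact GMI objective.

```python
# Minimal sketch (assumed, not the authors' implementation): closed-form MI
# between two jointly Gaussian random vectors, estimated from node embeddings.
import numpy as np

def gaussian_mi(z1: np.ndarray, z2: np.ndarray, eps: float = 1e-6) -> float:
    """Estimate I(Z1; Z2) = 0.5 * log( det(C1) * det(C2) / det(C) ) under a
    joint Gaussian assumption, where C1, C2 are the per-view covariances and
    C is the covariance of the concatenated views."""
    z1 = z1 - z1.mean(axis=0)
    z2 = z2 - z2.mean(axis=0)
    n, d = z1.shape
    c1 = (z1.T @ z1) / (n - 1) + eps * np.eye(d)
    c2 = (z2.T @ z2) / (n - 1) + eps * np.eye(d)
    joint = np.concatenate([z1, z2], axis=1)
    c = (joint.T @ joint) / (n - 1) + eps * np.eye(2 * d)
    # slogdet avoids overflow/underflow for high-dimensional embeddings.
    _, logdet_c1 = np.linalg.slogdet(c1)
    _, logdet_c2 = np.linalg.slogdet(c2)
    _, logdet_c = np.linalg.slogdet(c)
    return 0.5 * (logdet_c1 + logdet_c2 - logdet_c)

# Usage: z1, z2 are (num_nodes, dim) embeddings of two augmented graph views;
# maximizing gaussian_mi(z1, z2) needs no negative samples and no learned critic.
```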
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3023