Hierarchical Self-Supervised Graph Contrastive Learning: Capturing Multi-Scale Structural Information
Keywords: Graph Neural Networks, Self-supervised Learning, Contrastive Learning, Hierarchical Representation, Node Classification
Abstract: Graph Neural Networks (GNNs) have emerged as powerful tools for learning representations from graph-structured data (Kipf & Welling, 2017; Veličković et al., 2018), but often rely heavily on labeled data for training. This paper introduces a novel hierarchical self-supervised graph contrastive learning framework that effectively leverages unlabeled data to enhance node representations. Our method captures rich structural information at multiple scales by incorporating contrastive objectives at the node, subgraph, and graph levels, extending previous work on self-supervised learning for graphs (Veličković et al., 2019; You et al., 2020). We employ an adaptive graph augmentation strategy to generate meaningful views of the graph while preserving essential properties. Through extensive experiments on benchmark datasets, including Cora, Citeseer, PubMed (Sen & Dhillon, 2008), and Reddit (Hamilton et al., 2017), we demonstrate that our approach consistently outperforms both supervised and self-supervised baseline models in node classification tasks. Our method shows particular strength in low-label regimes and exhibits strong generalization capabilities in both transductive and inductive settings. Ablation studies confirm the importance of each hierarchical component, while qualitative analyses illustrate the discriminative power of the learned embeddings. This work opens new avenues for self-supervised learning on graphs and has broad implications for applications where labeled data is scarce or expensive to obtain, such as in social networks (Perozzi et al., 2014) and biological networks (Zitnik et al., 2017).
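The abstract describes contrastive objectives at three scales (node, subgraph, graph) over two augmented views. The sketch below is not the authors' code; it is a minimal illustration of how such a hierarchical objective could be combined, assuming node-level InfoNCE, mean-pooled cluster embeddings for the subgraph term, and a DGI-style readout-agreement term at the graph level. All function names, the cluster-assignment input, and the loss weights are illustrative assumptions.

```python
# Minimal sketch of a hierarchical (node / subgraph / graph) contrastive objective.
# Not the paper's implementation; names, pooling choices, and weights are assumptions.
import torch
import torch.nn.functional as F


def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Symmetric InfoNCE between two aligned embedding sets (row i of z1 and z2 are positives)."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                                   # [N, N] similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))


def hierarchical_contrastive_loss(h1, h2, cluster_assign, w=(1.0, 1.0, 1.0), tau=0.5):
    """
    h1, h2:         node embeddings [N, d] from two augmented views of the same graph.
    cluster_assign: LongTensor [N] mapping each node to a subgraph/cluster id
                    (e.g. from METIS or Louvain partitioning) -- an assumed input.
    w:              weights for the (node, subgraph, graph) terms.
    """
    # Node level: the same node in the two views is a positive pair.
    l_node = info_nce(h1, h2, tau)

    # Subgraph level: mean-pool node embeddings within each cluster, then contrast matching clusters.
    n_clusters = int(cluster_assign.max().item()) + 1
    one_hot = F.one_hot(cluster_assign, n_clusters).float()      # [N, C]
    counts = one_hot.sum(0).clamp(min=1).unsqueeze(1)            # [C, 1]
    s1 = (one_hot.t() @ h1) / counts                             # [C, d]
    s2 = (one_hot.t() @ h2) / counts
    l_sub = info_nce(s1, s2, tau)

    # Graph level: encourage agreement between each node and the other view's
    # whole-graph summary (a simplified, DGI-style readout term).
    g1, g2 = h1.mean(0, keepdim=True), h2.mean(0, keepdim=True)  # [1, d]
    l_graph = 0.5 * (-F.cosine_similarity(h1, g2.expand_as(h1)).mean()
                     - F.cosine_similarity(h2, g1.expand_as(h2)).mean())

    return w[0] * l_node + w[1] * l_sub + w[2] * l_graph
```

In practice the two views h1 and h2 would come from a shared GNN encoder applied to two augmentations of the input graph (the paper's adaptive augmentation strategy); here they are simply taken as inputs so the loss composition stands on its own.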
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13882