Graph Contrastive Learning with Personalized Augmentation

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Abstract: Graph contrastive learning (GCL) has emerged as an effective tool for learning representations of whole graphs in the absence of labels. The key idea is to maximize the agreement between two augmented views of each graph produced by data augmentation. Existing GCL models mainly apply \textit{identical augmentation strategies} to all graphs within a given scenario. However, real-world graphs are rarely monomorphic; they are abstractions of objects with diverse natures. Even within the same scenario (e.g., macromolecules or online communities), different graphs may require different augmentations for GCL to be effective. Blindly augmenting all graphs without considering their individual characteristics can therefore undermine the performance of GCL methods. Achieving such personalized allocation is non-trivial, however, since the search space over per-graph augmentation choices grows exponentially with the number of graphs. To bridge this gap, we propose the first principled framework, termed \textit{G}raph contrastive learning with \textit{P}ersonalized \textit{A}ugmentation (GPA). It advances conventional GCL by allowing each graph to choose its own suitable augmentation operations. To cope with the huge search space, we design a tailored augmentation selector that relaxes the discrete choice space into a continuous one; the selector is a plug-and-play module that can be trained end-to-end with downstream GCL models. Extensive experiments on 10 benchmark graph datasets of different types and domains demonstrate the superiority of GPA over state-of-the-art competitors. Moreover, by visualizing the learned augmentation distributions across different types of datasets, we show that GPA effectively identifies the most suitable augmentations for each graph based on its characteristics.
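The abstract gives no implementation details, so the following is a minimal sketch of the two components it describes, under stated assumptions: a per-graph augmentation selector that uses a Gumbel-Softmax relaxation to make the discrete choice differentiable (one common way to turn a discrete space into a continuous one; the paper's actual relaxation may differ), and a standard NT-Xent contrastive loss for the "maximize agreement between two views" objective. All names here (AUGMENTATIONS, AugmentationSelector, nt_xent) and the candidate augmentation pool are hypothetical, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical augmentation pool -- illustrative names, not taken from the paper.
AUGMENTATIONS = ["node_drop", "edge_perturb", "attr_mask", "subgraph"]

class AugmentationSelector(nn.Module):
    """Per-graph augmentation selector (sketch).

    Relaxes the discrete choice over augmentations into a continuous,
    differentiable one via Gumbel-Softmax, so the selector can be trained
    end-to-end with the contrastive loss. The abstract does not specify
    the paper's exact parameterization; this is one plausible reading.
    """

    def __init__(self, embed_dim: int, num_augs: int = len(AUGMENTATIONS)):
        super().__init__()
        self.scorer = nn.Linear(embed_dim, num_augs)

    def forward(self, graph_embedding: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        logits = self.scorer(graph_embedding)  # one score per candidate augmentation
        # hard=True yields a one-hot choice in the forward pass while letting
        # gradients flow through the soft sample (straight-through estimator).
        return F.gumbel_softmax(logits, tau=tau, hard=True)

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Standard NT-Xent loss that maximizes agreement between two views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)      # (2n, d): both views stacked
    sim = z @ z.t() / temperature       # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))   # a view is not its own positive
    # The positive for view i is its counterpart at i + n (and vice versa).
    targets = torch.cat([torch.arange(n, device=z.device) + n,
                         torch.arange(n, device=z.device)])
    return F.cross_entropy(sim, targets)
```

In such a setup, the selector's one-hot output would gate which augmentation is applied to each graph before the two views are encoded and scored by the contrastive loss; at inference the selector could simply take the argmax over its logits.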
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Unsupervised and Self-supervised learning