Contrastive Grouping-based Invariant Learning for Generalizable Graph Learning

22 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: grouping-based; graph invariant learning; out-of-distribution generalization
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Recently, Graph Neural Networks (GNNs) have demonstrated remarkable success in various graph learning tasks. However, most existing GNNs fail to generalize under distribution shifts, i.e., when testing and training graphs come from different distributions. Graph invariant learning is proposed to tackle this out-of-distribution (OOD) generalization problem by capturing the invariant relationships between graph features and labels. To this end, most graph invariant learning methods estimate the probability that a node or edge belongs to the invariant subgraph by measuring its contribution to the model's predictive performance. Nonetheless, relying solely on the model's predictive performance is insufficient to determine whether a given edge or node belongs to an invariant subgraph. To solve this problem, we propose a novel Contrastive Grouping-based Invariant Learning (CGIL) algorithm for OOD generalization on graphs. Our algorithm incorporates node grouping into the learning of invariant features. Unlike existing methods that simply employ a mask generator to learn node weights, CGIL clusters graph nodes into an invariant group and several contrast groups. CGIL then leverages graph connectivity information to enforce connectivity within the invariant group, and a contrastive loss constraint is adopted to guide the grouping and the generation of invariant subgraphs. Extensive experiments on four benchmark datasets show that CGIL outperforms nine state-of-the-art generalization methods on graph classification tasks.
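To make the abstract's grouping-plus-contrastive idea concrete, below is a minimal sketch (not the authors' code, and not tied to their architecture): node embeddings are softly assigned to one "invariant" group and several "contrast" groups, group embeddings are pooled, and an InfoNCE-style loss pulls the invariant-group embedding toward a positive (e.g., the invariant group of another same-label graph) while pushing it away from the contrast groups. All names here (GroupingHead, num_groups, tau, group_readout) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GroupingHead(nn.Module):
    """Softly assigns each node to K groups; group 0 is treated as the invariant group."""

    def __init__(self, hidden_dim: int, num_groups: int = 4):
        super().__init__()
        self.assign = nn.Linear(hidden_dim, num_groups)

    def forward(self, node_emb: torch.Tensor) -> torch.Tensor:
        # node_emb: [num_nodes, hidden_dim] -> soft assignments [num_nodes, num_groups]
        return F.softmax(self.assign(node_emb), dim=-1)


def group_readout(node_emb: torch.Tensor, assign: torch.Tensor) -> torch.Tensor:
    """Weighted mean-pool nodes into one embedding per group: [num_groups, hidden_dim]."""
    weights = assign / (assign.sum(dim=0, keepdim=True) + 1e-8)
    return weights.t() @ node_emb


def contrastive_group_loss(inv_emb: torch.Tensor, contrast_embs: torch.Tensor,
                           pos_emb: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE-style loss: the invariant-group embedding should be close to the positive
    and far from the contrast-group embeddings of its own graph."""
    inv = F.normalize(inv_emb, dim=-1)
    pos = F.normalize(pos_emb, dim=-1)
    neg = F.normalize(contrast_embs, dim=-1)
    pos_sim = (inv * pos).sum() / tau          # scalar similarity to the positive
    neg_sim = (neg @ inv) / tau                # [num_contrast_groups] similarities
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim])
    # The positive sits at index 0, so the target class is 0.
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))


# Toy usage with random node embeddings from two same-label graphs.
node_emb_a, node_emb_b = torch.randn(10, 16), torch.randn(12, 16)
head = GroupingHead(hidden_dim=16, num_groups=4)
groups_a = group_readout(node_emb_a, head(node_emb_a))   # [4, 16]
groups_b = group_readout(node_emb_b, head(node_emb_b))
loss = contrastive_group_loss(groups_a[0], groups_a[1:], groups_b[0])
```

This sketch omits the connectivity constraint inside the invariant group and the downstream classifier the abstract mentions; it only illustrates how a grouping head and a contrastive objective could interact.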
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5311