Learning Invariant Graph Representations via Virtual Environment Inference

19 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference · Withdrawn Submission
Keywords: graph out-of-distribution generalization; graph invariant learning
TL;DR: We propose a novel graph invariant learning framework that tackles graph OOD generalization from a class-wise perspective, enabling the inference of more reliable virtual environments.
Abstract: Graph invariant learning aims to learn graph representations that remain invariant across different environments, and has achieved great success in tackling Out-of-Distribution (OOD) generalization in graph-related tasks. Because environment labels on graphs are usually expensive to obtain, most graph invariant learning methods rely heavily on inferring the underlying environments in order to learn environment-wise invariant graph representations. In practice, inferring these environments is extremely challenging, due to the high heterogeneity of graph environments and the unknown number of underlying environments. In this paper, we address the graph OOD generalization task from a class-wise perspective, which enables us to generate more reliable virtual environments for effective graph invariant learning. This is motivated by the observation that class-wise spurious features are more likely to be shared across different classes despite high environment heterogeneity. To this end, we introduce a novel framework, named Class-wise invariant risk minimization via Virtual Environment Inference (C-VEI), which aims to discard class-wise spurious correlations while preserving class-wise invariance. Specifically, to infer the class-wise virtual environments, C-VEI introduces a contrastive strategy on the latent space, which i) pulls together samples from the same class whose graph representations are dissimilar and ii) pushes apart samples from different classes whose graph representations are similar. In addition, we design a class-wise invariant risk minimization objective to preserve class-wise invariance. We conduct extensive experiments on several graph OOD benchmarks and demonstrate the consistent superiority of our C-VEI across all settings and metrics. The source code will be made publicly available.
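The contrastive strategy described in the abstract can be sketched as a supervised contrastive loss over graph representations. The sketch below is an illustrative assumption, not the authors' exact objective: the function name, the temperature `tau`, and the SupCon-style formulation are all hypothetical. The softmax weighting naturally emphasizes the hard pairs the abstract targets: same-class samples with dissimilar representations receive larger gradients pulling them together, while different-class samples with similar representations receive larger gradients pushing them apart.

```python
import numpy as np

def classwise_contrastive_loss(z, y, tau=0.5):
    """Sketch of a class-wise contrastive objective (illustrative only).

    z : (n, d) array of graph representations
    y : (n,) array of class labels
    tau : temperature controlling how sharply hard pairs are emphasized
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine geometry
    sim = (z @ z.T) / tau                             # scaled similarities
    n = len(y)
    idx = np.arange(n)
    total, count = 0.0, 0
    for i in range(n):
        pos = (y == y[i]) & (idx != i)   # same-class positives for anchor i
        if not pos.any():
            continue
        # log-sum-exp over all other samples (positives and negatives)
        log_denom = np.log(np.exp(sim[i, idx != i]).sum())
        # average InfoNCE-style log-ratio over the anchor's positives
        total += -(sim[i, pos] - log_denom).mean()
        count += 1
    return total / count
```

A representation space where classes are well clustered yields a lower loss than one where classes are intermixed, which is the behavior the virtual-environment inference relies on.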
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1575