InfoIGL: Invariant Graph Learning Driven by Information Theory

22 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Graph OOD, Invariant Learning, Contrastive Learning
Abstract: Graph-based tasks often violate the i.i.d. assumption as data-collection scenarios change, which has drawn significant attention to graph out-of-distribution (OOD) generalization. While extracting invariant features is a popular solution, existing methods are limited by the complexity of graphs whose distribution shifts affect both attributes and structure. Moreover, identifying invariance on graphs is challenging due to the lack of prior knowledge about which features are invariant. To address these problems, we propose a novel framework, InfoIGL, which leverages information theory to extract invariant graph representations. The framework treats mutual information as the invariance of graphs by exploiting rich *semantic* relations among different distributions. Specifically, InfoIGL decomposes the extraction of invariant graph features into two tasks: **reducing redundant information** and **maximizing mutual information**. To reduce redundancy, InfoIGL applies an attention mechanism that lowers the entropy of graph representations by optimizing their probability distribution. InfoIGL then integrates semantic-wise and instance-wise contrastive learning to maximize mutual information through joint optimization. Additionally, an instance constraint and hard negative mining are employed to avoid the collapse of contrastive learning. Experiments on both synthetic and real-world datasets demonstrate that our method achieves state-of-the-art performance on OOD generalization for graph classification tasks. The source code is available at https://anonymous.4open.science/r/InfoIGL-268D.
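The two tasks the abstract names can be illustrated with a minimal pure-Python sketch. This is not the authors' implementation: the function names, the attention-pooling form, and the use of a supervised InfoNCE-style loss as the mutual-information surrogate are all illustrative assumptions chosen because maximizing contrastive agreement is a standard lower-bound proxy for mutual information.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def attention_pool(node_feats, scores):
    """Weighted sum of node features using attention scores --
    a stand-in for the redundancy-reducing attention step, which
    concentrates the representation on the most informative nodes."""
    w = softmax(scores)
    dim = len(node_feats[0])
    return [sum(w[i] * node_feats[i][d] for i in range(len(node_feats)))
            for d in range(dim)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv + 1e-12)

def instance_contrastive_loss(reps, labels, temperature=0.5):
    """Supervised InfoNCE-style loss: for each anchor, graphs with the
    same label are positives. Minimizing this pulls same-class graph
    representations together, i.e. it maximizes a contrastive lower
    bound on their mutual information."""
    n = len(reps)
    loss, count = 0.0, 0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        logits = [cosine(reps[i], reps[j]) / temperature for j in others]
        denom = sum(math.exp(l) for l in logits)
        for k, j in enumerate(others):
            if labels[j] == labels[i]:
                loss += -math.log(math.exp(logits[k]) / denom)
                count += 1
    return loss / max(count, 1)
```

As a sanity check, the loss is lower when labels match the geometry of the representations (similar vectors share a class) than when they are assigned adversarially, which is the behavior a mutual-information-maximizing objective should have.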
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4439