GSINA: Improving Graph Invariant Learning via Graph Sinkhorn Attention

18 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Graph Neural Networks, Graph Invariant Learning, Graph Classification, Node Classification
Abstract: Graph invariant learning (GIL) has been extensively studied to discover the invariant relationships between graph data and labels for different graph learning tasks under various distribution shifts. Many recent GIL methods focus on discovering invariant features to improve the generalization of graph learning. However, such methods are often limited in obtaining invariant features that are sufficiently expressive within the solution space. In this paper, we first discuss the limitations of previous works and summarize three design principles of the invariant feature extractor for GIL: 1) sparsity, to filter out the variant features; 2) softness, for a broader solution space; and 3) differentiability, for sound end-to-end optimization. By leveraging Optimal Transport (OT) theory, we propose Graph Sinkhorn Attention (GSINA) to meet these requirements in one shot. GSINA is a framework for GIL at multiple task levels, which infers differentiable graph invariant features with controllable sparsity and softness. Experiments on both synthetic and real-world datasets validate the superiority of GSINA, which outperforms state-of-the-art GIL methods (GSAT, CIGA, EERM) by large margins on both graph-level and node-level tasks. The PyTorch source code is provided in the supplementary materials and will be made publicly available on GitHub.
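The core of the abstract's claim is that Sinkhorn normalization yields attention weights that are simultaneously sparse (controllable via a temperature), soft, and differentiable. As a rough illustration of that mechanism, the following is a minimal, generic log-domain Sinkhorn sketch in NumPy; it is not the authors' GSINA implementation (which formulates a relaxed OT problem over graph features), and the function name and parameters are illustrative assumptions.

```python
import numpy as np


def _logsumexp(x, axis):
    # Numerically stable log-sum-exp.
    m = x.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))


def sinkhorn_attention(scores, tau=0.1, n_iters=50):
    """Generic log-domain Sinkhorn normalization of a score matrix.

    Alternately normalizes rows and columns in log space, producing an
    (approximately) doubly-stochastic matrix whose entries remain
    differentiable in `scores`. A smaller temperature `tau` pushes the
    result toward a hard (sparse) assignment; a larger `tau` keeps it soft.
    """
    log_p = scores / tau
    for _ in range(n_iters):
        log_p = log_p - _logsumexp(log_p, axis=1)  # normalize rows
        log_p = log_p - _logsumexp(log_p, axis=0)  # normalize columns
    return np.exp(log_p)
```

Because every step is a smooth row/column renormalization, gradients flow through the whole iteration, which is what allows such attention to be trained end-to-end while `tau` trades off sparsity against softness.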
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1236