Rethinking Invariant Graph Representation Learning without Environment Partitions

Published: 10 Mar 2023, Last Modified: 28 Apr 2023, ICLR 2023 Workshop DG Poster
Keywords: Graph Representation Learning, Out-of-Distribution Generalization, Causality
TL;DR: We establish impossibility results for learning invariant graph representations without environment partitions, develop minimal assumptions under which the problem becomes feasible, and propose a novel solution.
Abstract: Out-of-distribution generalization on graphs requires graph neural networks to identify the invariance among data from different environments. Since environment partitions on graphs are usually expensive to obtain, augmenting the environment information has become the de facto approach. However, the usefulness of the augmented environment information has never been verified. In this work, we show that, without additional assumptions, it is fundamentally impossible to learn invariant graph representations from augmented environment information alone. We therefore develop a set of minimal assumptions, including variation sufficiency and variation consistency, under which invariant graph learning is feasible. Based on these assumptions, we propose the Graph invAriant Learning Assistant (GALA), which adopts an additional assistant model that is deliberately prone to distribution shifts to generate proxy predictions about the environments. We show that maximizing intra-class information guided by these proxy predictions provably identifies the graph invariance under the minimal assumptions. We demonstrate the usefulness of GALA with extensive experiments on 11 datasets covering various graph distribution shifts.
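To make the two-stage recipe in the abstract concrete, below is a minimal, hypothetical sketch: an ERM-trained assistant whose per-sample correctness serves as a proxy environment partition, followed by an encoder trained to maximize intra-class agreement across the proxy environments. Every name, architecture, and dataset here is an illustrative assumption; for brevity, graphs are reduced to precomputed feature vectors, whereas the paper trains GNN encoders.

```python
# A minimal, hypothetical sketch of the GALA recipe described in the abstract,
# not the authors' implementation. Graphs are reduced to precomputed feature
# vectors for brevity; the paper instead works with GNN encoders.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy data: 256 "graphs" as 16-d feature vectors with noisy binary labels.
X = torch.randn(256, 16)
y = (X[:, 0] + 0.5 * torch.randn(256) > 0).long()

def mlp(d_in, d_out):
    return torch.nn.Sequential(
        torch.nn.Linear(d_in, 32), torch.nn.ReLU(), torch.nn.Linear(32, d_out)
    )

# Stage 1: train an assistant model with plain ERM. Because such a model is
# prone to distribution shifts, which samples it fits (predicts correctly)
# versus fails on yields a proxy partition of the environments.
assistant = mlp(16, 2)
opt = torch.optim.Adam(assistant.parameters(), lr=1e-2)
for _ in range(50):
    opt.zero_grad()
    F.cross_entropy(assistant(X), y).backward()
    opt.step()
with torch.no_grad():
    proxy_env = assistant(X).argmax(dim=1) == y  # True: fitted, False: not

# Stage 2: train the main encoder to maximize intra-class information across
# the two proxy environments, so that features varying with the environment
# (the spurious ones) are suppressed and the invariant ones are retained.
encoder = mlp(16, 8)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-2)
for _ in range(200):
    z = F.normalize(encoder(X), dim=1)
    terms = []
    for c in (0, 1):
        a = z[(y == c) & proxy_env]   # same class, proxy environment 1
        b = z[(y == c) & ~proxy_env]  # same class, proxy environment 0
        if len(a) and len(b):
            # Maximize mean cross-environment, intra-class similarity.
            terms.append(-(a @ b.t()).mean())
    if terms:
        loss = torch.stack(terms).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The sketch only conveys the control flow; the paper derives this intra-class objective formally and proves that it identifies the graph invariance under variation sufficiency and variation consistency.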
Submission Number: 9