Enhancing Graph Invariant Learning from a Negative Inference Perspective

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: In this work, we propose a negative inference graph OOD framework (NeGo) to broaden the inference space for environment factors, effectively addressing the limitations that existing methods cannot tackle.
Abstract: The out-of-distribution (OOD) generalization challenge is a longstanding problem in graph learning. By studying the fundamental cause of data distribution shift, i.e., changes of environments, significant progress has been made on this issue. However, we observe that existing works still fail to effectively address complex environment shifts. Existing practices place excessive attention on extracting causal subgraphs, inevitably treating spurious subgraphs as environment variables. While spurious subgraphs are controlled by environments, the space of environment changes extends beyond the scale of spurious subgraphs. Therefore, existing efforts have a limited inference space for environments, leading to failure under severe environment changes. To tackle this issue, we propose a negative inference graph OOD framework (NeGo) to broaden the inference space for environment factors. Inspired by the successful practice of prompt learning in capturing underlying semantics and causal associations in large language models, we design a negative prompt environment inference to extract underlying environment information. We further introduce environment-enhanced invariant subgraph learning to effectively exploit the inferred environment embedding, ensuring robust extraction of causal subgraphs under environment shifts. Lastly, we conduct a comprehensive evaluation of NeGo on real-world and synthetic datasets across domains. NeGo outperforms baselines on nearly all datasets, verifying the effectiveness of our framework.
Lay Summary: Whether artificial intelligence models can remain effective in constantly changing scenarios is a widely recognized challenge. Our research focuses on graph-structured data, such as molecular graphs and social networks. We believe that for models to handle different scenarios, they should not only learn how to produce correct answers but also learn to recognize which answers are wrong. We therefore introduce the concept of negative learning. Moreover, what exactly these scenarios entail is often unknown in advance, so we design a mechanism to automatically understand and capture them. Finally, we conduct experiments on datasets from various scenarios and find that our method achieves the best performance.
Primary Area: Deep Learning->Graph Neural Networks
Keywords: Graph learning, out-of-distribution generalization, environment awareness, negative inference, prompt learning.
Submission Number: 4835