Keywords: graph neural networks, distribution shifts, test-time inference, MLOps
Abstract: Distribution shifts between training and test graphs typically degrade the performance of graph neural networks (GNNs), yielding suboptimal generalization in real-world applications. Despite advances in graph learning under distribution shifts, achieved through new model architectures and customized training strategies, existing solutions can be difficult to apply in practical GNN deployment because they often require significant modifications to, or retraining of, the GNNs. To address these challenges, in this work we propose a novel method, Test-Time Graph REBirth, dubbed TT-GREB, which generalizes well-trained GNN models to test-time graphs under distribution shifts by directly manipulating the test graph data. Concretely, we develop an overall framework designed around two principles, corresponding to two submodules: (1) a prototype extractor for re-extracting the environment-invariant features of the test-time graph; and (2) an environment refiner for refining the environment-varying features to explore the potential shifts. Furthermore, we propose a dual test-time graph contrastive learning objective with an effective iterative optimization strategy to obtain the optimal prototype and environmental components of the test graph. By reassembling these two components, we obtain a newly reborn test graph that is better suited to the well-trained GNN model under shifts in graph distribution. Extensive experiments on real-world graphs under diverse test-time distribution shifts verify the effectiveness of the proposed method, showcasing its superior ability to manipulate test-time graphs for better GNN generalization.
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9300