Avoiding Structural Pitfalls: Self-Supervised Low-Rank Feature Tuning for Graph Test-Time Adaptation
Abstract: Pre-trained graph neural networks (GNNs) have demonstrated significant success in leveraging large-scale graph data to learn transferable representations. However, their performance often degrades under distribution shifts, particularly in real-world scenarios where test labels are unavailable. To address this challenge, we propose Graph Optimization via Augmented Transformations (GOAT), a novel self-supervised test-time tuning paradigm that adapts pre-trained GNNs to distribution-shifted test data by focusing exclusively on node feature transformations. By avoiding complex and often suboptimal graph structure transformations, GOAT overcomes the limitations of existing data-centric methods.
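To make the feature-only tuning paradigm concrete, here is a minimal sketch of a self-supervised test-time adaptation loop of this kind, assuming a PyTorch GNN callable as `gnn(x, edge_index)`. The frozen backbone, the `adapter` module, and the dropout-based consistency loss are illustrative assumptions, not the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def test_time_adapt(gnn, adapter, x, edge_index, steps=10, lr=1e-3):
    """Adapt only the feature transformation; the pre-trained GNN stays frozen."""
    gnn.eval()
    for p in gnn.parameters():
        p.requires_grad_(False)

    opt = torch.optim.Adam(adapter.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Transformed node features pass through the frozen GNN;
        # the graph structure (edge_index) is never modified.
        z = gnn(adapter(x), edge_index)
        # Hypothetical self-supervised objective: representation consistency
        # between two stochastic views of the node features.
        z_aug = gnn(adapter(F.dropout(x, p=0.1, training=True)), edge_index)
        loss = 1.0 - F.cosine_similarity(z, z_aug, dim=-1).mean()
        loss.backward()
        opt.step()

    with torch.no_grad():
        return gnn(adapter(x), edge_index)
```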
To further address transformation collapse, where the learned feature transformations converge to trivial solutions, we introduce a parameter-efficient low-rank adapter that generates diverse transformations tailored to individual input graphs. This design not only enhances adaptation performance but also preserves interpretability, since the graph structure is left untouched. Through extensive experiments on six real-world datasets with diverse distribution shifts, we demonstrate that GOAT achieves consistent performance improvements across different pre-trained GNN backbones, outperforming state-of-the-art test-time adaptation methods.
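As a hypothetical sketch of what such a parameter-efficient low-rank adapter could look like: a residual transformation x + (xA)B with rank r much smaller than the feature dimension, zero-initialized so adaptation starts from the identity. Conditioning the factors on a graph-level embedding would be one route to graph-specific transformations; the paper's actual design may differ:

```python
import torch
import torch.nn as nn

class LowRankFeatureAdapter(nn.Module):
    """Low-rank residual transformation of node features (illustrative sketch).

    Trains only 2 * dim * rank parameters instead of a full dim x dim map,
    which keeps the adapter parameter-efficient and limits collapse to
    trivial solutions by starting from the unmodified features.
    """
    def __init__(self, dim: int, rank: int = 8):
        super().__init__()
        self.A = nn.Parameter(torch.randn(dim, rank) * 0.01)
        # Zero-initialized second factor => the adapter is the identity at start.
        self.B = nn.Parameter(torch.zeros(rank, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + (x @ self.A) @ self.B
```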
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=Ithe9abTdk
Changes Since Last Submission: This submission is a revised version of a previously desk-rejected manuscript (TMLR forum Ithe9abTdk). We have addressed the formatting issues to fully comply with TMLR's style requirements and improved the paper accordingly.
Assigned Action Editor: ~Mark_Coates1
Submission Number: 5172