Optimizing Activations Beyond Entropy Minimization for Test-Time Adaptation of Graph Neural Networks

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: test-time adaptation, batch normalization, graph neural network, energy-based model
TL;DR: We introduce a data-driven two-step TTA framework for GNNs that first adapts BN layer statistics to the test data distribution and then refines BN layer parameters using a joint energy-based model.
Abstract: Test-time adaptation optimizes a classifier on unlabeled test samples through self-supervised objectives, without access to labeled training data. Existing methods often rely on entropy minimization, which ties model performance to prediction confidence or to representations with clear cluster structure. However, because no ground-truth labels are available at test time, test-time adaptation, although an effective way to handle shifts in dataset distributions or domains, can lead to model collapse. In this paper, we focus on optimizing activations in batch normalization (BN) layers for test-time adaptation of graph neural networks (GNNs). Unlike many entropy minimization methods that are prone to catastrophic model collapse, our approach leverages pseudo-labels of test samples to mitigate forgetting of the training data. We optimize BN activations in two steps. First, we estimate weights and masks that combine the empirical batch mean and variance of the training and test samples. Second, we refine BN's scale and shift parameters using a reformulated loss based on an energy-based model for improved generalization. Empirical evaluation on seven challenging datasets demonstrates that our method outperforms state-of-the-art test-time adaptation approaches.
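The two-step procedure described in the abstract can be illustrated with a minimal sketch. The PyTorch snippet below is a hypothetical illustration, not the authors' implementation: the mixing weight `alpha`, the `mask_threshold`, and the helper names `adapt_bn_statistics` / `refine_bn_affine` are assumptions introduced for exposition. Step 1 blends a BN layer's stored training statistics with the empirical statistics of the test batch; step 2 updates only the BN scale/shift parameters with a pseudo-label cross-entropy term plus a joint energy-based term, taking the energy of a sample as the negative log-sum-exp of its logits.

```python
# Hypothetical sketch of the two-step BN adaptation described in the abstract.
# Names (alpha, mask_threshold, adapt_bn_statistics, refine_bn_affine) are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F


@torch.no_grad()
def adapt_bn_statistics(test_batch_stats, alpha=0.7, mask_threshold=0.1):
    """Step 1: blend stored training statistics with test-batch statistics.

    test_batch_stats maps each BN module to its (mean, var) on the test batch.
    A per-channel mask selects channels whose statistics shifted noticeably;
    those channels are mixed with weight alpha, the rest keep the training value.
    """
    for module, (test_mean, test_var) in test_batch_stats.items():
        train_mean, train_var = module.running_mean, module.running_var
        mask = (train_mean - test_mean).abs() > mask_threshold
        module.running_mean.copy_(torch.where(
            mask, alpha * train_mean + (1 - alpha) * test_mean, train_mean))
        module.running_var.copy_(torch.where(
            mask, alpha * train_var + (1 - alpha) * test_var, train_var))


def refine_bn_affine(model, loader, optimizer, lam=0.1, steps=1):
    """Step 2: refine BN scale/shift with pseudo-labels plus an energy term.

    The optimizer is assumed to hold only the BN weight/bias parameters, so
    all other model parameters stay frozen. The energy of a sample is taken
    as -logsumexp(logits), as in joint energy-based models; lam weights the
    energy term against the pseudo-label cross-entropy.
    """
    for _ in range(steps):
        for graphs in loader:
            logits = model(graphs)                            # GNN forward pass
            pseudo_labels = logits.argmax(dim=-1).detach()    # pseudo-labels
            ce = F.cross_entropy(logits, pseudo_labels)       # pseudo-label term
            energy = -torch.logsumexp(logits, dim=-1).mean()  # JEM-style energy
            loss = ce + lam * energy
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

In such a sketch, the optimizer passed to `refine_bn_affine` would be constructed only over the `weight` and `bias` tensors of the model's BN layers, which keeps the adaptation lightweight and limits the risk of forgetting the training distribution.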
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9520