TL;DR: A theoretically grounded TTA paradigm that effectively addresses the efficiency and domain-forgetting challenges by aligning feature correlations.
Abstract: Deep neural networks often degrade under distribution shifts. Although domain adaptation offers a solution, privacy constraints often prevent access to source data, making Test-Time Adaptation (TTA)—which adapts using only unlabeled test data—increasingly attractive. However, current TTA methods still face practical challenges: (1) a primary focus on instance-wise alignment, overlooking CORrelation ALignment (CORAL) because source correlations are unavailable; (2) complex backpropagation operations for model updating, resulting in computational overhead; and (3) domain forgetting. To address these challenges, we provide a theoretical analysis of the feasibility of **T**est-time **C**orrelation **A**lignment (**TCA**), demonstrating that correlation alignment between high-certainty instances and test instances can improve test performance with a theoretical guarantee. Based on this, we propose two simple yet effective algorithms: LinearTCA and LinearTCA+. LinearTCA applies a simple linear transformation to achieve both instance and correlation alignment without additional model updates, while LinearTCA+ serves as a plug-and-play module that can easily boost existing TTA methods. Extensive experiments validate our theoretical insights and show that TCA methods significantly outperform baselines across various tasks, benchmarks, and backbones. Notably, LinearTCA achieves higher accuracy while using only 4\% of the GPU memory and 0.6\% of the computation time of the best TTA baseline. It also outperforms existing methods on CLIP by over 1.86\%. Code: https://github.com/youlj109/TCA
Lay Summary: In this work, we propose Test-time Correlation Alignment (TCA), a theoretically grounded and efficient approach to Test-Time Adaptation (TTA) that aligns feature correlations between high-confidence test samples and the test domain—without requiring access to source data.
TCA addresses several key challenges in existing TTA methods:
(1) the neglect of correlation alignment due to missing source statistics,
(2) high computational overhead caused by backpropagation-based updates, and
(3) domain forgetting during adaptation.
We begin by exploring the feasibility of TCA through two central questions:
(1) Can we construct a pseudo-source correlation that approximates the true source correlation?
(2) Can this enable effective adaptation at test time?
To answer these, we present a theoretical analysis showing that aligning correlations between high-certainty and test instances can provably improve test-time performance.
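To make the first question concrete, here is a minimal sketch (in PyTorch) of one plausible way to build pseudo-source statistics: rank test instances by prediction entropy and estimate a mean and covariance from the most certain ones. The entropy criterion, the function name, and the parameter `k` are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def pseudo_source_statistics(features, logits, k=64):
    """Estimate pseudo-source statistics from the k most certain test
    instances, ranked by prediction entropy (an illustrative criterion;
    the paper's exact certainty measure may differ)."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    idx = entropy.argsort()[:k]              # k lowest-entropy instances
    certain = features[idx]
    mu = certain.mean(dim=0, keepdim=True)
    centered = certain - mu
    cov = centered.T @ centered / max(len(certain) - 1, 1)
    return mu, cov
```

The resulting `mu` and `cov` can then stand in for the missing source statistics when aligning, as sketched further below.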
Based on this insight, we propose two simple yet effective methods: LinearTCA and LinearTCA+.
- LinearTCA performs both instance- and correlation-level alignment via a lightweight linear transformation, without modifying model parameters (see the sketch after this list).
- LinearTCA+ serves as a plug-and-play module that enhances existing TTA methods with minimal effort.
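As a rough illustration of the alignment step, the sketch below shows how a single linear map can align test features to pseudo-source statistics, following the classic CORAL whitening-and-recoloring recipe; the exact LinearTCA transform and its numerical details are assumptions here, not the authors' released implementation.

```python
import torch

def matrix_power(mat, p, eps=1e-5):
    """Symmetric matrix power via eigendecomposition, with eigenvalues
    clamped for numerical stability."""
    vals, vecs = torch.linalg.eigh(mat)
    return vecs @ torch.diag(vals.clamp_min(eps) ** p) @ vecs.T

def linear_align(test_feats, mu_s, cov_s, eps=1e-5):
    """Align test features to pseudo-source statistics with one linear
    transformation: whiten by the test covariance, re-color with the
    pseudo-source covariance, then match means. No model parameters are
    updated and no backpropagation is required."""
    mu_t = test_feats.mean(dim=0, keepdim=True)
    centered = test_feats - mu_t
    cov_t = centered.T @ centered / max(len(test_feats) - 1, 1)
    whiten = matrix_power(cov_t, -0.5, eps)    # decorrelate test features
    recolor = matrix_power(cov_s, 0.5, eps)    # impose pseudo-source correlations
    return centered @ whiten @ recolor + mu_s  # instance + correlation alignment
```

Because the aligned features are simply fed to the frozen classifier head, a transform of this kind would preserve the backpropagation-free, low-memory behavior claimed in the abstract.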
Experimental results demonstrate that LinearTCA achieves strong standalone performance, while LinearTCA⁺ consistently boosts other TTA approaches across diverse settings.
Further analysis provides insights into the applicability and limitations of LinearTCA, offering valuable guidance for future research.
Link To Code: https://github.com/youlj109/TCA
Primary Area: General Machine Learning->Transfer, Multitask and Meta-learning
Keywords: Test-time adaptation, Correlation alignment
Submission Number: 14963