Keywords: test time adaptation, continual learning
TL;DR: We propose a test-time adaptation method that leverages the intrinsic spectral structures of pretrained Vision Transformers, addressing underexplored challenges in both TTA and CTTA.
Abstract: Test-time adaptation (TTA) has been widely explored to prevent performance degradation when test data differ from the training distribution.
However, fully leveraging the rich representations of large pretrained models with minimal parameter updates remains underexplored.
In this paper, we propose Intrinsic Mixture of Spectral Experts (IMSE) that leverages the spectral experts inherently embedded in Vision Transformers.
We decompose each linear layer via singular value decomposition (SVD) and adapt only the singular values, while keeping the singular vectors fixed.
We further identify a key limitation of entropy minimization in TTA: it often induces feature collapse, causing the model to rely on domain-specific features rather than class-discriminative features.
To address this, we propose a diversity maximization loss based on expert–input alignment, which encourages diverse utilization of spectral experts during adaptation.
In the continual test-time adaptation (CTTA) scenario, beyond preserving pretrained knowledge, it is crucial to retain and reuse knowledge from previously observed domains. We introduce Domain-Aware Spectral Code Retrieval, which estimates input distributions to detect domain shifts, and retrieves adapted singular values for rapid adaptation.
Consequently, our method achieves state-of-the-art performance on various distribution-shift benchmarks under the TTA setting.
In CTTA and Gradual CTTA, it further improves accuracy by 3.4 percentage points (pp) and 2.4 pp, respectively, while requiring 385 times fewer trainable parameters.
Our code is available at https://github.com/baek85/IMSE.
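The abstract's core mechanism (decompose each linear layer by SVD, freeze the singular vectors, and adapt only the singular values) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation: the layer sizes and the multiplicative update standing in for a test-time gradient step are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# A pretrained linear layer's weight (hypothetical sizes).
m, n = 64, 32
W = rng.standard_normal((m, n))

# Decompose once: W = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(W, full_matrices=False)

# Only the singular values are adapted; U and Vt stay frozen.
s_adapted = s.copy()
s_adapted *= 1.01  # placeholder for a test-time update of s

# Reconstructed weight used in the adapted forward pass.
W_adapted = U @ np.diag(s_adapted) @ Vt

# Sanity check: unchanged singular values recover W exactly.
assert np.allclose(U @ np.diag(s) @ Vt, W)

# Trainable parameters: min(m, n) singular values vs. m*n weights.
print(len(s), m * n)
```

The parameter saving comes from the last line: adapting only `s` trains min(m, n) scalars per layer instead of the full m×n weight matrix, which is where a large reduction in trainable parameters can arise.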
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 11595