REORIENTING THE FROZEN SPACE: TRAINING-FREE TEST-TIME ADAPTATION BY GEOMETRIC TRANSFORMATION

18 Sept 2025 (modified: 12 Nov 2025), ICLR 2026 Conference Withdrawn Submission, CC BY 4.0
Keywords: Training-free test-time adaptation, vision-language model, CLIP
Abstract: With the widespread application of Vision-Language Models (VLMs) to downstream tasks, test-time adaptation (TTA) methods built on VLMs, particularly training-free ones, have attracted increasing attention for their ability to handle distribution shifts during testing. Yet existing training-free methods remain constrained by the fixed geometry of pretrained feature spaces, which limits class separability. We propose SOBA, a training-free TTA method that edits decision geometry by re-expressing class prototypes in a test-induced orthogonal basis. SOBA maintains a lightweight dynamic queue of high-confidence test samples, derives an orthogonal basis via singular value decomposition, and aligns prototypes with the most discriminative directions of the test distribution. This simple adjustment enlarges inter-class margins, sharpens decision boundaries, and improves recognition of semantically similar categories, all without modifying features, prompts, or model parameters. Extensive experiments on multiple benchmarks show that SOBA achieves state-of-the-art accuracy and superior efficiency compared to both training-free and backprop-based TTA methods.
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 12176
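
The abstract describes the adaptation loop only at a high level, so below is a minimal NumPy sketch of how the stated steps (a confidence-filtered queue of test features, an SVD-derived orthogonal basis, and prototype reorientation) could fit together. The class name SOBASketch, the hyperparameters (queue_size, conf_threshold, k, temperature), and the specific projection used for reorientation are illustrative assumptions inferred from the abstract, not the authors' actual implementation.

import numpy as np
from collections import deque

class SOBASketch:
    """Reorients frozen class prototypes in a basis derived from confident test features (illustrative sketch)."""

    def __init__(self, prototypes, queue_size=64, conf_threshold=0.7, k=32, temperature=100.0):
        # prototypes: (C, D) class prototypes from the frozen VLM text encoder.
        self.prototypes = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
        self.queue = deque(maxlen=queue_size)   # lightweight dynamic queue of test features
        self.conf_threshold = conf_threshold    # keep only high-confidence samples (assumed value)
        self.k = k                              # number of test-space directions to retain (assumed value)
        self.temperature = temperature          # CLIP-style logit scale (assumed value)

    def _reoriented_prototypes(self):
        # Fall back to the original geometry until the queue holds enough samples.
        if len(self.queue) < self.k:
            return self.prototypes
        X = np.stack(self.queue)                                 # (N, D) queued test features
        # Orthogonal basis of the test distribution via SVD of the centered features.
        _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
        B = Vt[: self.k]                                         # (k, D) leading right-singular directions
        # Re-express prototypes along these directions, then renormalize.
        P = self.prototypes @ B.T @ B                            # (C, D)
        return P / np.linalg.norm(P, axis=1, keepdims=True)

    def predict(self, feat):
        # feat: (D,) image feature from the frozen visual encoder.
        feat = feat / np.linalg.norm(feat)
        logits = self.temperature * (self._reoriented_prototypes() @ feat)
        logits -= logits.max()                                   # numerical stability for softmax
        probs = np.exp(logits) / np.exp(logits).sum()
        if probs.max() >= self.conf_threshold:                   # enqueue only confident test samples
            self.queue.append(feat)
        return int(probs.argmax())

Projecting the prototypes onto the top-k right-singular vectors of the queued features is one plausible reading of "aligning prototypes to the most discriminative directions of the test distribution"; the paper's exact transformation may differ.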