RDumb: A simple approach that questions our progress in continual test-time adaptation

Published: 21 Sept 2023, Last Modified: 19 Jan 2024, NeurIPS 2023 poster
Keywords: test time adaptation, continual adaptation, benchmarking, imagenet-c, imagenet classification, robustness, continual learning, imagenet benchmark
TL;DR: We rigorously benchmark many test-time adaptation methods and find a simple baseline approach to be superior.
Abstract: Test-Time Adaptation (TTA) allows pre-trained models to be updated to changing data distributions at deployment time. While early work tested these algorithms on individual, fixed distribution shifts, recent work has proposed and applied methods for continual adaptation over long timescales. To examine the reported progress in the field, we propose the Continually Changing Corruptions (CCC) benchmark to measure the asymptotic performance of TTA techniques. We find that, eventually, all but one of the state-of-the-art methods collapse and perform worse than a non-adapting model, including methods specifically proposed to be robust to performance collapse. In addition, we introduce a simple baseline, "RDumb", that periodically resets the model to its pretrained state. RDumb performs better than or on par with the previously proposed state-of-the-art on all considered benchmarks. Our results show that previous TTA approaches are neither effective at regularizing adaptation to avoid collapse nor able to outperform a simplistic resetting strategy.
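The resetting strategy described in the abstract is simple enough to sketch in code. Below is a minimal, hypothetical PyTorch-style wrapper illustrating the idea, not the authors' released implementation: the names `RDumbWrapper`, `adapt_step`, and the `reset_every` interval are assumptions for illustration, and the underlying test-time adapter (e.g., an entropy-minimization update) is left abstract.

```python
import copy


class RDumbWrapper:
    """Minimal sketch of a periodic-reset baseline (assumed names):
    run any test-time adapter, but restore the pretrained weights
    after a fixed number of adaptation steps."""

    def __init__(self, model, adapt_step, reset_every=1000):
        self.model = model
        self.adapt_step = adapt_step    # one TTA update on a test batch; returns predictions
        self.reset_every = reset_every  # hypothetical reset interval
        self.steps = 0
        # Keep a frozen copy of the pretrained state to reset back to.
        self.pretrained_state = copy.deepcopy(model.state_dict())

    def __call__(self, batch):
        if self.steps > 0 and self.steps % self.reset_every == 0:
            # Periodic reset: discard all adaptation accumulated so far.
            self.model.load_state_dict(self.pretrained_state)
        preds = self.adapt_step(self.model, batch)
        self.steps += 1
        return preds
```

The design point the sketch captures is that resetting bounds error accumulation: no matter how the adapter drifts, its state is discarded every `reset_every` steps, so collapse cannot compound over long timescales.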
Submission Number: 12030