Uncovering the Risk of Model Collapsing in Self-Supervised Continual Test-time Adaptation

Published: 13 Oct 2024, Last Modified: 02 Dec 2024, NeurIPS 2024 Workshop SSL, CC BY 4.0
Keywords: test-time adaptation, continual adaptation, performance degradation, self-supervised learning, model collapse
Abstract: Test-time adaptation (TTA) has emerged as a promising solution for tackling continual domain shift in machine learning. However, updating model parameters at test time via self-supervised learning (SSL) on unlabeled testing data can open the door to unforeseen security vulnerabilities. This work highlights two such scenarios. The first arises in a **recurring TTA** setting, where an extensive testing stream reveals the risk of lifelong performance degradation of a TTA model after repeated rounds of adaptation. The second is **Reusing Incorrect Prediction (RIP)**, a surprisingly simple scheme by which attackers can intentionally submit malicious samples to silently degrade TTA model performance. We extensively benchmark the most recent continual TTA approaches against these risks, provide theoretical insights into this phenomenon, and propose best practices that can strengthen robustness when adopting SSL in future continual TTA systems.
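The failure mode behind RIP can be illustrated with a toy construction (our own illustrative sketch, not the paper's method or experiment): a one-parameter logistic classifier that adapts at test time by self-training on its own hard pseudo-labels. When the initial prediction is wrong, each adaptation step reinforces that error, so the model drifts toward being confidently wrong.

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def self_train_step(w: float, x: float, lr: float = 0.5) -> float:
    """One pseudo-label TTA update: treat the model's own hard prediction
    as ground truth and take a gradient step on binary cross-entropy."""
    p = sigmoid(w * x)
    pseudo = 1.0 if p >= 0.5 else 0.0   # the model's (possibly wrong) prediction
    grad = (p - pseudo) * x             # d/dw of BCE with the pseudo-label
    return w - lr * grad

# Suppose the true label of x = 1.0 is 1, but the model starts slightly wrong
# (w < 0 means it predicts class 0 for this input).
w = -0.1
for _ in range(20):
    w = self_train_step(w, 1.0)

# Repeated adaptation on the model's own incorrect prediction pushes the
# weight further negative: the error has been amplified, not corrected.
print(w < -0.1)  # True
```

The same self-reinforcing loop is what an RIP-style attacker can exploit: by submitting samples the model already misclassifies, the SSL update trains the model on its own mistakes.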
Submission Number: 60