Fast Test-Time Adaptation Using Hints

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: test-time, adaptation, robustness, distribution shifts
Abstract: We propose a framework for adapting neural networks to distribution shifts at test-time. The primary idea is to leverage proper adaptation objectives based on known general properties of the target task, e.g. multi-view geometry for 3D tasks, or hierarchical structure for semantic tasks. These properties can be instantiated as adaptation signals at test-time, which we refer to as "hints". These hints are robust to distribution shifts, thus, they make adaptation more reliable compared to existing test-time adaptation methods, e.g. entropy minimization. Next, we show that this optimization during test-time can be amortized using a side-network, thus, making the adaptation orders of magnitude faster. We call this variant of test-time adaption Rapid Network Adaptation (RNA). We demonstrate consistent improvements over the baselines on diverse tasks (depth, optical flow, semantic segmentation, classification), datasets (Taskonomy, Replica, ScanNet, COCO, ImageNet) and distribution shifts (Common Corruptions, 3D Common Corruptions, cross-datasets).
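The core loop described in the abstract — take a pre-trained model, and at test time optimize it against a hint-based objective rather than an unsupervised proxy like entropy — can be sketched in a few lines. The sketch below is our own illustration, not the paper's implementation: the function names (`predict`, `adapt_with_hint`) are ours, and a one-parameter linear model stands in for the neural network, with a handful of trusted target values standing in for a hint such as sparse depth or multi-view consistency.

```python
# Illustrative sketch of hint-based test-time adaptation (TTA).
# Assumptions (not from the paper): a scalar linear "network" y = w * x,
# and hints given as a few reliable target values on the test batch.

def predict(w, x):
    """Toy 'network': a single scalar weight."""
    return w * x

def adapt_with_hint(w, xs, hints, lr=0.05, steps=20):
    """Adapt w at test time by gradient descent on a hint-based loss.

    xs:    test-time inputs
    hints: robust supervision available at test time (the paper's "hints"),
           e.g. a few trustworthy depth measurements.
    Loss:  mean squared error between predictions and hints.
    """
    for _ in range(steps):
        # analytic gradient of the mean squared error w.r.t. w
        grad = sum(2 * (predict(w, x) - h) * x for x, h in zip(xs, hints)) / len(xs)
        w -= lr * grad
    return w

# Pretend the source-domain model learned w = 1.0, but under the
# distribution shift the correct scaling on target data is w = 2.0.
xs = [1.0, 2.0, 3.0]
hints = [2.0, 4.0, 6.0]  # hint values consistent with w = 2.0
w_adapted = adapt_with_hint(1.0, xs, hints)
print(round(w_adapted, 2))  # close to 2.0 after adaptation
```

The RNA variant in the abstract replaces this per-sample gradient loop with a learned side-network that maps the hint signal directly to a parameter update, which is what makes adaptation orders of magnitude faster; that amortization step is not shown here.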
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Applications (eg, speech processing, computer vision, NLP)
TL;DR: We propose a method for fast test-time adaptation to distribution shifts.
Supplementary Material: zip