Semantic Self-adaptation: Enhancing Generalization with a Single Sample

Published: 19 Jul 2023, Last Modified: 17 Sept 2024
Accepted by TMLR
Abstract: The lack of out-of-domain generalization is a critical weakness of deep networks for semantic segmentation. Previous studies relied on the assumption of a static model, i.e., once the training process is complete, model parameters remain fixed at test time. In this work, we challenge this premise with a self-adaptive approach for semantic segmentation that adjusts the inference process to each input sample. Self-adaptation operates on two levels. First, it fine-tunes the parameters of convolutional layers to the input image using consistency regularization. Second, in Batch Normalization layers, self-adaptation interpolates between the training distribution and a reference distribution derived from a single test sample. Despite both techniques being well known in the literature, their combination sets new state-of-the-art accuracy on synthetic-to-real generalization benchmarks. Our empirical study suggests that self-adaptation may complement the established practice of model regularization at training time for improving deep network generalization to out-of-domain data. Our code and pre-trained models are available at https://github.com/visinf/self-adaptive.
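The Batch Normalization component described in the abstract can be sketched as follows. This is a minimal illustration only, assuming a PyTorch BatchNorm2d layer; the mixing coefficient alpha and the helper name interpolate_bn_stats are hypothetical and not taken from the paper or the released code.

```python
import torch
import torch.nn.functional as F

def interpolate_bn_stats(bn: torch.nn.BatchNorm2d, x: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    # Blend the layer's stored (training) statistics with statistics
    # estimated from a single test sample x of shape (1, C, H, W).
    # alpha = 0 keeps the training statistics; alpha = 1 uses only the sample statistics.
    sample_mean = x.mean(dim=(0, 2, 3))
    sample_var = x.var(dim=(0, 2, 3), unbiased=False)
    mean = (1.0 - alpha) * bn.running_mean + alpha * sample_mean
    var = (1.0 - alpha) * bn.running_var + alpha * sample_var
    # Normalize with the interpolated statistics, reusing the layer's affine parameters.
    return F.batch_norm(x, mean, var, bn.weight, bn.bias, training=False, eps=bn.eps)
```

The convolutional-layer adaptation mentioned in the abstract (fine-tuning with consistency regularization on the test image) is a separate step and is not shown in this sketch.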
Submission Length: Regular submission (no more than 12 pages of main content)
Supplementary Material: zip
Changes Since Last Submission:
- The algorithm block now appears in Section 3 (Reviewer eXNC).
- Section 5.4 now features the analysis of hyperparameter sensitivity, with cross-references to related discussions (Reviewers ycjh and TqUP).
- Appendix B.4 now includes experimental results on ACDC, extending the set of target domains from the main text (Reviewer TqUP).
Code: https://github.com/visinf/self-adaptive
Assigned Action Editor: ~Wei_Liu3
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 957