JumpStyle: Jump Starting Style Aware Test-Time Domain Generalization

TMLR Paper 587 Authors

10 Nov 2022 (modified: 28 Feb 2023) · Rejected by TMLR
Abstract: The performance of deep networks is quite vulnerable to distribution shifts encountered at test time, even for models that have been trained to generalize to unseen domains. It is therefore imperative that the model update itself in an online manner, leveraging the test data. In this work, we propose a novel framework for test-time adaptation of deep networks trained in the Domain Generalization (DG) setting. Specifically, we propose two modifications over Tent, the current state-of-the-art approach for test-time adaptation, for the DG task: (i) jump-starting the adaptation with an effective initialization and (ii) style-aware augmentation-based pseudo-labelling. The proposed framework only assumes access to the trained backbone and is agnostic to the model training process. We demonstrate the effectiveness of the proposed JumpStyle framework on four DG benchmark datasets, namely PACS, VLCS, Office-Home and Terra-Incognita. Extensive experiments using standard backbones trained on multiple source domains, as well as a state-of-the-art DG method, show that the proposed framework generalizes not only across different backbones but also across different training methods.
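The abstract describes the two components only at a high level. Below is a minimal sketch of what a Tent-style test-time adaptation loop with style-aware pseudo-labelling might look like; it is an illustration under assumptions, not the authors' implementation. The AdaIN-style statistic mixing in style_augment, the confidence threshold, and all function names are hypothetical, and the "jump start" initialization (component (i)) is not detailed in the abstract, so only the Tent baseline plus component (ii) is sketched.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def configure_model(model):
    # Tent-style setup: freeze everything, then adapt only the affine
    # (scale/shift) parameters of the batch-norm layers.
    model.requires_grad_(False)
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.requires_grad_(True)
            params.extend(m.parameters())
    return params

def style_augment(x, alpha=0.5):
    # Hypothetical AdaIN-style perturbation: mix each image's channel-wise
    # statistics with those of a randomly permuted batch member.
    mu = x.mean(dim=(2, 3), keepdim=True)
    sigma = x.std(dim=(2, 3), keepdim=True) + 1e-6
    perm = torch.randperm(x.size(0))
    mixed_mu = alpha * mu + (1 - alpha) * mu[perm]
    mixed_sigma = alpha * sigma + (1 - alpha) * sigma[perm]
    return mixed_sigma * (x - mu) / sigma + mixed_mu

@torch.no_grad()
def pseudo_labels(model, x, threshold=0.9):
    # Keep only confident predictions that also survive a style perturbation,
    # one plausible reading of "style-aware augmentation based pseudo-labelling".
    p_clean = F.softmax(model(x), dim=1)
    p_aug = F.softmax(model(style_augment(x)), dim=1)
    conf, labels = p_clean.max(dim=1)
    mask = (conf > threshold) & (p_aug.argmax(dim=1) == labels)
    return labels, mask

def adapt_batch(model, x, optimizer):
    model.train()  # BN uses test-batch statistics, as in Tent
    logits = model(x)
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.log().clamp(min=-20)).sum(dim=1).mean()
    labels, mask = pseudo_labels(model, x)
    ce = (F.cross_entropy(logits, labels, reduction="none") * mask).sum()
    ce = ce / mask.sum().clamp(min=1)
    loss = entropy + ce  # Tent entropy term plus filtered pseudo-label term
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits.detach()
```

Under these assumptions, the online setting from the abstract would correspond to building an optimizer over configure_model(model) (e.g. torch.optim.Adam with a small learning rate) and calling adapt_batch on each incoming test batch.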
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Charles_Xu1
Submission Number: 587