Abstract: Adapting a trained model to perform satisfactorily on continually changing test environments is an important and challenging task. In this work, we propose a novel framework, SANTA, which aims to satisfy the following characteristics required for online adaptation: 1) it should work effectively for different (even small) batch sizes; 2) it should continue to work well on the source domain; 3) it should have minimal tunable hyperparameters and storage requirements. Given a pre-trained network trained on source-domain data, the proposed framework modifies the affine parameters of the batch normalization layers using source-anchoring-based self-distillation. This ensures that the model incorporates knowledge from the newly encountered domains without catastrophically forgetting the previously seen domains. We also propose a source-prototype-driven contrastive alignment to ensure natural grouping of the target samples while maintaining the already learnt semantic information. Extensive evaluation on three benchmark datasets under challenging settings demonstrates the effectiveness of SANTA for real-world applications. Code: https://github.com/goirik-chakrabarty/SANTA
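The adaptation mechanism described in the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' released code: the function names, the KL-based distillation loss, and the optimizer settings are illustrative assumptions. The sketch only shows the two ingredients the abstract names, making the BatchNorm affine (scale/shift) parameters the sole trainable parameters, and updating them with a self-distillation loss anchored to a frozen copy of the source model.

```python
# Minimal sketch (not the authors' code): adapt only BatchNorm affine
# parameters with a source-anchored self-distillation loss.
import copy
import torch
import torch.nn.functional as F


def collect_bn_affine_params(model):
    """Return only the BatchNorm scale/shift parameters for optimization."""
    params = []
    for module in model.modules():
        if isinstance(module, torch.nn.modules.batchnorm._BatchNorm):
            if module.weight is not None:
                params.append(module.weight)
            if module.bias is not None:
                params.append(module.bias)
    return params


def adapt_on_batch(student, source_anchor, optimizer, x, temperature=1.0):
    """One online-adaptation step on an unlabeled target batch x."""
    with torch.no_grad():
        anchor_logits = source_anchor(x)  # frozen source model's predictions
    student_logits = student(x)
    # Source-anchored self-distillation: keep the adapted model's predictions
    # close to the frozen source model's predictions on the same batch, so new
    # domains are absorbed without catastrophically forgetting the source.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(anchor_logits / temperature, dim=1),
        reduction="batchmean",
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return student_logits


# Usage (assumed setup): freeze everything except the BN affine parameters
# of the adapted copy, and keep the original source model as the anchor.
# student = copy.deepcopy(source_model)
# source_anchor = source_model.eval()
# for p in student.parameters():
#     p.requires_grad_(False)
# bn_params = collect_bn_affine_params(student)
# for p in bn_params:
#     p.requires_grad_(True)
# optimizer = torch.optim.SGD(bn_params, lr=1e-3)
```

The source-prototype-driven contrastive alignment mentioned in the abstract would add a second loss term that pulls target features toward class prototypes computed from the source model; it is omitted here for brevity.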
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: All changes suggested by reviewer noN9 have been incorporated. We have deanonymized the submission and added acknowledgements. Further, experiments on the Continuously Changing Corruptions (CCC) benchmark have been added to the Appendix.
We thank all the reviewers and the action editor for their comments and insights, which have greatly improved the quality of our work.
Code: https://github.com/goirik-chakrabarty/SANTA
Assigned Action Editor: ~Eleni_Triantafillou1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1430