Activate and Adapt: A Two-Stage Framework for Open-Set Model Adaptation

TMLR Paper4498 Authors

17 Mar 2025 (modified: 22 May 2025) · Under review for TMLR · CC BY 4.0
Abstract: The ability to generalize to new environments is critical for deep neural networks. Most existing works presume that the training and test data share an identical label set, overlooking the potential presence of new classes in the test data. In this paper, we tackle a practical and challenging problem: Open-Set Model Adaptation (OSMA). OSMA aims to train a model on a source domain that contains only known-class data, and then adapt the trained model to a distribution-shifted target domain, where it must classify known-class data while identifying new-class data. This setting poses two challenges: (1) enabling the model to recognize new classes while training only on known-class data from the source domain, and (2) adapting the source-trained model to a target domain that contains new-class data. To address these challenges, we propose a novel and universal two-stage framework named Activate and Adapt (ADA). In the training stage, we extract potential new-class information hidden within the rich semantics of the source domain data, enabling the model to identify new-class data. Additionally, to retain source domain information while preserving data privacy, we condense the source domain data into a small dataset, facilitating the subsequent adaptation stage. In the test stage, we adaptively adjust the source-trained model to the target domain with new classes by infusing the style of the target data into the condensed dataset and by decoupling domain alignment for known and new classes. Experiments on three standard benchmarks demonstrate that ADA surpasses previous methods in both online and offline settings.
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Vincent_Dumoulin1
Submission Number: 4498