Abstract: Open-Set Domain Adaptation (OSDA) aims to adapt a model trained on a labeled source domain to an unlabeled target domain that is corrupted with unknown classes. The key challenge inherent to this open-set setting is therefore how best to avoid the negative transfer incurred by unknown classes during model adaptation. Most existing works tackle this challenge by simply pushing the unknown classes away as a whole. In this paper, we take a different stance: instead of treating these unknown classes as a single entity, we “reserve” in-between spaces for their subsets in the learned embedding. Our key finding is that the inter-class relations learned on the source domain can help enforce class separation in the target domain, thereby reserving spaces for unknown classes. More specifically, we first prepare the “reservation” by tightening the known-class representations while enlarging their inter-class margins. We then learn soft-label prototypes on the source domain to facilitate the discrimination of known and unknown samples in the target domain. These two steps are iterated at each epoch in a mutually beneficial manner: better discrimination of unknown samples helps with space reservation, and vice versa. We show state-of-the-art results on four standard OSDA datasets, Office-31, Office-Home, VisDA, and ImageCLEF, and conduct further analysis to help understand our method. Code is available at: https://github.com/PRIS-CV/Reserve_to_Adapt