CODA: Generalizing to Open and Unseen Domains with Compaction and Disambiguation

Published: 21 Sept 2023, Last Modified: 26 Dec 2023 · NeurIPS 2023 spotlight
Keywords: Domain generalization, domain shift, open class, source compaction, target disambiguation
TL;DR: We propose a principled framework (CODA) for a new and challenging domain generalization setting where both domain shift and open classes occur in the test data.
Abstract: The generalization capability of machine learning systems degrades notably when the test distribution drifts from the training distribution. Recently, Domain Generalization (DG) has been gaining momentum in enabling machine learning models to generalize to unseen domains. However, most DG methods assume that training and test data share an identical label space, ignoring the potential unseen categories in many real-world applications. In this paper, we delve into a more general but difficult problem termed Open Test-Time DG (OTDG), where both domain shift and open classes can occur in the unseen test data. We propose Compaction and Disambiguation (CODA), a novel two-stage framework for learning compact representations and adapting to open classes in the wild. To meaningfully regularize the model's decision boundary, CODA introduces virtual unknown classes and optimizes a new training objective that inserts unknowns into the latent space by compacting the embedding space of the source known classes. To adapt target samples to the source model, we then disambiguate the decision boundaries between known and unknown classes with a test-time training objective, mitigating the adaptivity gap and catastrophic forgetting challenges. Experiments reveal that CODA significantly outperforms the previous best method on standard DG datasets and balances classification accuracy between known and unknown classes.
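To make the two-stage idea concrete, below is a minimal, hypothetical NumPy sketch of the abstract's description: a source-side "compaction" loss that appends virtual unknown classes to the classifier head and pushes known-class scores above the virtual-unknown scores by a margin (so unknowns can occupy the space between compacted known classes), and a test-time "disambiguation" loss implemented here as entropy minimization over all known + virtual classes. Function names, the margin formulation, and the entropy objective are illustrative assumptions, not the paper's exact objectives.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax along the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def compaction_loss(logits, labels, num_known, margin=1.0):
    """Source-stage sketch (assumed form): logits has shape
    (N, num_known + num_virtual); labels index known classes only.
    Cross-entropy on the true known class, plus a hinge term that keeps
    every virtual-unknown logit at least `margin` below the true-class
    logit, compacting the known-class regions of the embedding space."""
    n = len(labels)
    probs = softmax(logits)
    ce = -np.log(probs[np.arange(n), labels] + 1e-12).mean()
    true_logit = logits[np.arange(n), labels][:, None]
    virtual = logits[:, num_known:]  # scores of the virtual unknown classes
    margin_pen = np.maximum(0.0, virtual - true_logit + margin).mean()
    return ce + margin_pen

def disambiguation_loss(logits):
    """Test-time stage sketch (assumed form): label-free entropy
    minimization over known + virtual-unknown classes, sharpening each
    target sample's decision between 'a known class' and 'unknown'."""
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1).mean()
```

At test time, a sample whose probability mass lands on the virtual columns would be flagged as an open-class (unknown) instance, while the rest are classified among the known labels.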
Supplementary Material: pdf
Submission Number: 706