Domain Generalization via Feature Variation Decorrelation

ACM Multimedia 2021 (modified: 16 Nov 2022)
Abstract: Domain generalization aims to learn a model that generalizes from multiple source domains to unseen target domains. Various approaches based on adversarial learning, meta-learning, and data augmentation have been proposed to address this problem. However, these methods offer no guarantee of generalization to the target domain. Motivated by the observation that class-irrelevant information in a sample, in the form of semantic variation, leads to negative transfer, we propose to linearly disentangle the variation from the sample in feature space and to impose a novel class decorrelation regularization on the feature variation. In this way, the model focuses on the high-level categorical concept for prediction while ignoring misleading cues from other variations (including domain changes). As a result, we achieve state-of-the-art performance by large margins on all widely used domain generalization benchmarks, namely PACS, VLCS, Office-Home, and Digits-DG. Further analysis reveals that our method learns a better domain-invariant representation and that the decorrelated feature variation captures semantic meaning.
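The abstract gives only a high-level description of the method. As a rough illustration, the following is a minimal sketch of one plausible reading of the two ingredients: linearly disentangling the class-irrelevant variation as the offset of a feature from its class prototype, and a decorrelation penalty that discourages the variation from carrying categorical information. All function names are hypothetical, the prototype-offset interpretation and the cosine-alignment penalty are assumptions, and the paper's actual regularizer may differ.

```python
import numpy as np

def feature_variation(features, labels, num_classes):
    """Hypothetical linear disentanglement: split each feature f_i into a
    class prototype mu_{y_i} (per-class mean) and a residual variation
    v_i = f_i - mu_{y_i}, assumed to hold the class-irrelevant information."""
    protos = np.stack([features[labels == c].mean(axis=0)
                       for c in range(num_classes)])
    variations = features - protos[labels]
    return variations, protos

def class_decorrelation_loss(variations, protos, eps=1e-8):
    """Hypothetical class decorrelation regularizer: penalize the squared
    cosine alignment between each variation and every class prototype, so
    that variations become uninformative about the category."""
    v = variations / (np.linalg.norm(variations, axis=1, keepdims=True) + eps)
    p = protos / (np.linalg.norm(protos, axis=1, keepdims=True) + eps)
    return np.mean((v @ p.T) ** 2)

# Toy usage: two classes in a 2-D feature space.
feats = np.array([[1., 0.], [3., 0.], [0., 2.], [0., 4.]])
labels = np.array([0, 0, 1, 1])
var, protos = feature_variation(feats, labels, num_classes=2)
loss = class_decorrelation_loss(var, protos)
```

By construction the variations of each class average to zero, and the penalty is minimized when variations are orthogonal to all class prototypes, which matches the stated goal of stripping categorical content out of the variation.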