LaFea: Learning Latent Representation Beyond Feature for Universal Domain Adaptation

Published: 01 Jan 2023 · Last Modified: 30 Sep 2024 · IEEE Trans. Circuits Syst. Video Technol. 2023 · CC BY-SA 4.0
Abstract: Universal Domain Adaptation (UniDA) is a recently introduced problem that aims to transfer knowledge from a source domain to a target domain without any prior knowledge of the label sets. The main challenge is to separate common samples from private samples in the target domain. In general, existing methods achieve this goal by performing domain adaptation only on the features extracted by the backbone network. However, relying solely on the learning of the backbone network may not fully exploit the effectiveness of the features, because (1) the discrepancy between the two domains can naturally distract the learning of the backbone network, and (2) irrelevant content in the samples (e.g., backgrounds) can pass through the backbone network and thereby hinder the learning of domain-informative features. To this end, we describe a new method that provides extra guidance to the learning of the backbone network based on a latent representation beyond features (LaFea). We are motivated by the observation that a latent representation can be learned to gather the domain-relevant information scattered across features, and that learning this latent representation naturally promotes the effectiveness of the corresponding features in return. To achieve this goal, we develop a simple GAN-style architecture that transforms features into the latent representation, and we propose new objectives to learn this representation adversarially. Notably, the latent representation serves only as an auxiliary during training and is not needed at inference. Extensive experiments on four datasets corroborate the superiority of our method over state-of-the-art methods.
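The GAN-style adversarial setup mentioned in the abstract can be illustrated with a minimal, dependency-light sketch. Everything below is a hypothetical stand-in for illustration (linear maps for the feature-to-latent transform `W_g` and the domain discriminator `w_d`, synthetic Gaussian "features", arbitrary dimensions), not the paper's actual architecture or objectives; it shows only the generic idea of a discriminator trained to tell source latents from target latents, which in the full method would be played against the latent transform in a min-max game.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper
feat_dim, latent_dim = 8, 4

# Synthetic stand-ins for backbone features from the two domains
src_feats = rng.normal(0.0, 1.0, size=(32, feat_dim))
tgt_feats = rng.normal(0.5, 1.0, size=(32, feat_dim))  # shifted distribution

# G: linear map from features to the latent representation (fixed here)
W_g = rng.normal(scale=0.1, size=(feat_dim, latent_dim))
# D: linear domain discriminator acting on the latent representation
w_d = rng.normal(scale=0.1, size=latent_dim)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_loss(W_g, w_d):
    # Binary cross-entropy: D should output 1 on source latents, 0 on target
    p_src = sigmoid(src_feats @ W_g @ w_d)
    p_tgt = sigmoid(tgt_feats @ W_g @ w_d)
    eps = 1e-8
    return -(np.log(p_src + eps).mean() + np.log(1.0 - p_tgt + eps).mean())

# One gradient step on D; a forward-difference numerical gradient keeps
# the sketch free of autodiff dependencies
lr, eps = 0.1, 1e-5
grad = np.array([
    (d_loss(W_g, w_d + eps * np.eye(latent_dim)[i]) - d_loss(W_g, w_d)) / eps
    for i in range(latent_dim)
])
w_d_new = w_d - lr * grad  # discriminator loss decreases after this step

print("D loss before:", d_loss(W_g, w_d))
print("D loss after: ", d_loss(W_g, w_d_new))
```

In the full adversarial objective, the latent transform would take the opposite gradient (or a confusion loss) so that the latent representation absorbs domain-relevant information; here only the discriminator half of that game is stepped, to keep the sketch short.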