Latent space decomposition into task-specific and domain-specific subspaces for domain adaptation

Takaya Ueda, Ikuko Nishikawa

Published: 2020 · Last Modified: 28 Feb 2026 · IJCNN 2020 · CC BY-SA 4.0
Abstract: In this study, we propose a novel method of acquiring a latent space of data that is composed of task-specific and domain-specific subspaces. Layered neural networks acquire effective features for a given task in their internal layers as a latent space. If the acquired latent space is relevant only to the task and invariant to the data domain, it can be adapted to a new domain. Our goal is therefore to acquire a latent space in which all the features relevant to the task are contained in a task-specific subspace, whereas all the features relevant to the data domain are contained in a domain-specific subspace. The method iterates two simple training steps. First, a classifier acquires a task-specific space in its final layer through conventional supervised learning, while a domain-specific space is obtained as its complementary subspace in the complete latent space; the latter is realized by autoencoder training using both subspaces. In the second step, two input data are used, and their task-specific and domain-specific representations are exchanged to generate two composite data. Further training of the classifier using the composite data forces each subspace to be more specific to either the task or the domain. Computer experiments are conducted on three well-known handwritten-digit datasets treated as three different domains. A visualization of the obtained task-specific space shows clusters corresponding to the digit classes, whereas the domain-specific space shows areas corresponding to the datasets. Domain adaptation to an unlearned dataset using the task-specific representation obtained by our method demonstrates higher performance than an alternative method.
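The exchange step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the subspace dimensions, the hard split of the latent vector by index, and all function names are assumptions made for the example.

```python
import numpy as np

# Hypothetical dimensions (not from the paper): the complete latent space
# is assumed to be the concatenation of a task-specific subspace and a
# domain-specific subspace.
TASK_DIM, DOMAIN_DIM = 10, 6

def split_latent(z):
    """Split a complete latent vector into (task-specific, domain-specific) parts."""
    return z[:TASK_DIM], z[TASK_DIM:]

def swap_representations(z_a, z_b):
    """Second training step of the iteration: exchange the task-specific and
    domain-specific representations of two inputs to build two composites."""
    t_a, d_a = split_latent(z_a)
    t_b, d_b = split_latent(z_b)
    # Composite 1 keeps A's task part with B's domain part; composite 2 is the reverse.
    comp_ab = np.concatenate([t_a, d_b])
    comp_ba = np.concatenate([t_b, d_a])
    return comp_ab, comp_ba

# Toy usage with two random latent vectors standing in for encodings of
# inputs from two different domains.
rng = np.random.default_rng(0)
z_a = rng.normal(size=TASK_DIM + DOMAIN_DIM)
z_b = rng.normal(size=TASK_DIM + DOMAIN_DIM)
comp_ab, comp_ba = swap_representations(z_a, z_b)

# Training the classifier to predict A's label from comp_ab (A's task part,
# B's domain part) penalizes any task information leaking into the domain
# subspace, pushing each subspace to be specific to task or domain.
assert np.allclose(comp_ab[:TASK_DIM], z_a[:TASK_DIM])
assert np.allclose(comp_ab[TASK_DIM:], z_b[TASK_DIM:])
```

In the paper the two subspaces are parts of the network's actual latent layer and the composites are fed back into the classifier and autoencoder for further training; the sketch only shows the bookkeeping of the swap itself.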