Learning Switchable Representation with Masked Decoding and Sparse Encoding

Published: 21 Jul 2022, Last Modified: 05 May 2023 · SCIS 2022 Poster
Keywords: Domain adaptation, unsupervised representation learning, identifiability, sparseness
TL;DR: We study how to identifiably learn the underlying "domain-private vs. domain-shared" structure of a dataset by training a masked decoder and a sparsity-regularized encoder.
Abstract: In this study, we explore unsupervised learning based on a private/shared factor decomposition, which splits the latent space into private factors, which vary only within a specific domain, and shared factors, which vary across all domains. We study when and how we can force the model to respect the true private/shared factor decomposition that underlies the dataset. We show that, when we train a masked decoder together with an encoder regularized for sparseness in the latent space, we can identify the true private/shared decomposition up to mixing within each component. We empirically confirm this result and study the efficacy of this training strategy as a representation learning method.
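The abstract describes the core training recipe: a decoder that only sees the shared latent block plus the private block of the sample's own domain (masked decoding), combined with a sparsity penalty on the encoder's latent codes. The following is a minimal PyTorch sketch of that idea; all architecture choices, dimensions, mask layout, and loss weights are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of masked decoding + sparse encoding (illustrative assumptions,
# not the paper's implementation). The latent vector is split into one shared
# block and one private block per domain; the decoder only receives the shared
# block and the private block of the sample's own domain, and an L1 penalty
# encourages sparse latent usage.

import torch
import torch.nn as nn

N_DOMAINS = 2
X_DIM, SHARED_DIM, PRIVATE_DIM = 20, 4, 3          # assumed toy dimensions
LATENT_DIM = SHARED_DIM + N_DOMAINS * PRIVATE_DIM

encoder = nn.Sequential(nn.Linear(X_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT_DIM))
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, X_DIM))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def domain_mask(d: torch.Tensor) -> torch.Tensor:
    """Binary mask keeping the shared block and the private block of domain d."""
    mask = torch.zeros(d.shape[0], LATENT_DIM)
    mask[:, :SHARED_DIM] = 1.0                       # shared factors: always visible
    for i in range(N_DOMAINS):                       # private factors: own domain only
        rows = (d == i)
        mask[rows, SHARED_DIM + i * PRIVATE_DIM : SHARED_DIM + (i + 1) * PRIVATE_DIM] = 1.0
    return mask

def training_step(x: torch.Tensor, d: torch.Tensor, sparsity_weight: float = 0.1) -> float:
    z = encoder(x)
    z_masked = z * domain_mask(d)                    # masked decoding
    recon = decoder(z_masked)
    loss = ((recon - x) ** 2).mean() + sparsity_weight * z.abs().mean()  # L1 sparsity on latents
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: a batch of random observations labeled with one of two domains.
x = torch.randn(32, X_DIM)
d = torch.randint(0, N_DOMAINS, (32,))
print(training_step(x, d))
```

The mask is what ties a private block to its domain: since the decoder never receives the private block of another domain, reconstruction pressure pushes domain-specific variation into the matching block, while the sparsity term discourages the encoder from redundantly spreading shared variation across private blocks.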