Style-Content Disentanglement Under Conditional Shift

ICLR 2024 Workshop DMLR Submission 32 Authors

Published: 04 Mar 2024, Last Modified: 02 May 2024 · DMLR @ ICLR 2024 · CC BY 4.0
Keywords: style-content disentanglement, weakly-supervised learning, distributional shift, conditional shift, variational autoencoders, causal inference
TL;DR: We perform style-content disentanglement on datasets with conditional shift by enforcing marginal independence among the content representations for each data environment.
Abstract: We propose a novel representation learning method called the Context-Aware Variational Autoencoder (CxVAE). Our model can perform style-content disentanglement on datasets with conditional shift. Conditional shift occurs when the distribution of a target variable $\mathbf{y}$ conditional on the input observation $\mathbf{x}$ --- $p(\mathbf{y}|\mathbf{x})$ --- changes across data environments (i.e., $p_i(\mathbf{y}|\mathbf{x}) \neq p_j(\mathbf{y}|\mathbf{x})$ for two different environments $i,j$). We introduce two novel style-content disentanglement datasets to show empirically that existing methods fail to disentangle under conditional shift. We propose CxVAE, a model that overcomes this limitation by enforcing independence across the content variables inferred from each environment. Our model presents two innovations: a context-aware encoder and a content adversarial loss. Through a specially designed experiment, we show empirically that these design choices directly improve our model's performance on datasets with conditional shift.
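To make the definition in the abstract concrete, here is a minimal sketch (not the authors' code; the environment slopes and noise scale are illustrative assumptions) of conditional shift: two environments share the same marginal $p(\mathbf{x})$, but the conditional $p_i(\mathbf{y}|\mathbf{x})$ differs between them, which a per-environment regression fit reveals.

```python
# Minimal sketch (illustrative, not the paper's code): conditional shift,
# where p(y|x) differs across environments while p(x) stays the same.
import numpy as np

rng = np.random.default_rng(0)

def make_environment(slope, n=10_000):
    """Same marginal p(x) in every environment; y|x depends on `slope`."""
    x = rng.normal(size=n)                     # shared p(x)
    y = slope * x + 0.1 * rng.normal(size=n)   # p_i(y|x) varies with slope
    return x, y

# Two environments with different conditional distributions p_i(y|x).
x1, y1 = make_environment(slope=2.0)
x2, y2 = make_environment(slope=-2.0)

# Least-squares slope recovers each environment's conditional mean E[y|x];
# the fitted slopes disagree, so p_1(y|x) != p_2(y|x).
slope1 = (x1 @ y1) / (x1 @ x1)
slope2 = (x2 @ y2) / (x2 @ x2)
print(round(slope1, 1), round(slope2, 1))
```

A method trained to predict $\mathbf{y}$ from $\mathbf{x}$ with a single shared conditional model cannot fit both environments at once, which is the failure mode the paper's datasets are designed to expose.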
Primary Subject Area: Impact of data bias, variance, and drifts
Paper Type: Research paper: up to 8 pages
Participation Mode: In-person
Confirmation: I have read and agree with the workshop's policy on behalf of myself and my co-authors.
Submission Number: 32