Keywords: causal representation learning, disentanglement, autoencoders
Abstract: Autoencoders have played a crucial role in representation learning since the field's inception, providing a flexible learning scheme that can accommodate various notions of optimality of the representation. The now-established idea of disentanglement and the recently popular perspective of causality in representation learning identify modularity and robustness as essential characteristics of the optimal representation. In this work, we show that the conceptual tools currently available to assess the quality of a representation against these criteria (e.g. latent traversals or disentanglement metrics) are inadequate. In this regard, we introduce the notion of \emph{interventional consistency} of a representation and argue that it is a desirable property of any disentangled representation. We develop a general training scheme for autoencoders that incorporates interventional consistency into the optimality condition. We present empirical evidence supporting the validity of the approach on three autoencoder variants: standard autoencoders (AE), variational autoencoders (VAE), and structural autoencoders (SAE).
Another key finding in this work is that differentiating between information and structure in the latent space of autoencoders can increase the modularity and interpretability of the resulting representation.
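To make the central notion concrete, the sketch below shows one plausible reading of an interventional consistency penalty, assuming a deterministic encoder/decoder pair: intervene on a single latent coordinate, decode, re-encode, and require the intervened code to be recovered. This is an illustrative interpretation, not necessarily the formulation used in the paper; the names `encoder`, `decoder`, `dim`, and `value` are hypothetical.

```python
import torch

def interventional_consistency_loss(encoder, decoder, x, dim, value):
    """Hypothetical sketch of an interventional consistency penalty:
    intervene on one latent coordinate, decode, re-encode, and penalise
    the gap between the re-encoded code and the intervened code."""
    z = encoder(x)                 # original latent code, shape (batch, latent_dim)
    z_do = z.clone()
    z_do[:, dim] = value           # do-style intervention on one coordinate
    x_do = decoder(z_do)           # decode the intervened code
    z_cycle = encoder(x_do)        # re-encode the intervened reconstruction
    return torch.mean((z_cycle - z_do) ** 2)
```

In practice such a term would presumably be added to the usual reconstruction (and, for a VAE, regularisation) objective with a weighting coefficient; the details of how interventions are sampled and weighted are specific to the paper's method and are not reproduced here.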
One-sentence Summary: Study of the interventional consistency of autoencoders for causal representation learning.