On Disentanglement in Gaussian Process Variational Autoencoders

Published: 29 Jan 2022, Last Modified: 20 Oct 2024, AABI 2022 Poster
Keywords: Variational Autoencoders, Gaussian processes, Disentanglement
TL;DR: We show that GP-VAE models already have strong disentanglement properties due to their prior and can outperform many standard disentanglement models.
Abstract: Complex multivariate time series arise in many fields, ranging from computer vision to robotics or medicine. Often we are interested in the independent underlying factors that give rise to the high-dimensional data we are observing. While many models have been introduced to learn such \emph{disentangled} representations, only a few attempt to explicitly exploit the structure of sequential data. We investigate the disentanglement properties of Gaussian process variational autoencoders, a recently introduced class of models that has proven successful on a range of time series tasks. Our model exploits the temporal structure of the data by modeling each latent channel with a Gaussian process prior and employing a structured variational distribution that can capture dependencies in time. We show that such priors can improve disentanglement and demonstrate the competitiveness of our approach against state-of-the-art unsupervised and weakly-supervised disentanglement methods on a benchmark task. Moreover, we provide evidence that we can learn disentangled representations on real-world medical time series data.
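To make the abstract's core idea concrete, here is a minimal sketch (not the authors' implementation) of the prior structure it describes: each latent channel is given an independent Gaussian process prior over time. The function names (`rbf_kernel`, `sample_gp_prior`), the squared-exponential kernel choice, and the lengthscale value are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumed, not the authors' code): an independent GP prior
# over each latent channel of a time series, as described in the abstract.
import numpy as np

def rbf_kernel(ts, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance over the time points `ts`."""
    d = ts[:, None] - ts[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_gp_prior(ts, n_channels, lengthscale=1.0, jitter=1e-6, seed=0):
    """Draw one GP prior sample per latent channel; returns shape (T, n_channels)."""
    rng = np.random.default_rng(seed)
    K = rbf_kernel(ts, lengthscale) + jitter * np.eye(len(ts))
    L = np.linalg.cholesky(K)
    # Each latent channel z_k(t) ~ GP(0, K), independent across channels,
    # which encourages temporally smooth, channel-wise independent factors.
    return L @ rng.standard_normal((len(ts), n_channels))

# Example: 50 time steps, 4 latent channels.
ts = np.linspace(0.0, 5.0, 50)
z = sample_gp_prior(ts, n_channels=4)
print(z.shape)  # (50, 4)
```

In a GP-VAE, samples like these replace the i.i.d. standard-normal prior of an ordinary VAE, so the latent code of a sequence is encouraged to vary smoothly in time within each channel while remaining independent across channels.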
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/on-disentanglement-in-gaussian-process/code)