Supplementary Material: zip
Keywords: variational autoencoder, benign overfitting, self-supervised, time-series, neuroscience
TL;DR: We show that modifying the VAE to predict the next point in time, combined with model selection based on the distance between temporally neighboring points in the encoding space, is effective in reducing benign overfitting.
Abstract: Variational autoencoders (VAEs) have been used extensively to discover low-dimensional latent factors governing neural activity and animal behavior. However, without careful model selection, the uncovered latent factors may reflect noise in the data rather than true underlying features, rendering such representations unsuitable for scientific interpretation. Existing solutions to this problem involve introducing additional measured variables or data augmentations specific to a particular data type. We propose a VAE architecture that predicts the next point in time and show that it mitigates the learning of spurious features. In addition, we introduce a model selection metric based on smoothness over time in the latent space. We show that, together, these two constraints encouraging VAEs to be smooth over time produce robust latent representations and faithfully recover latent factors on synthetic datasets.
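The model selection metric described above can be sketched as the average distance between temporally adjacent points in latent space, with lower values indicating a smoother trajectory. This is a minimal illustrative sketch, not the authors' implementation; the function name `temporal_smoothness` and the choice of mean Euclidean step length are assumptions.

```python
import numpy as np

def temporal_smoothness(z):
    """Mean Euclidean distance between consecutive latent encodings.

    z: array of shape (T, d), latent codes ordered in time.
    Lower values correspond to smoother latent trajectories, which
    this metric would favor during model selection. (Illustrative
    form only; the paper's exact criterion may differ.)
    """
    diffs = np.diff(z, axis=0)                    # (T-1, d) successive steps
    return float(np.linalg.norm(diffs, axis=1).mean())

# A straight-line latent trajectory with constant step [1, 2]:
z_line = np.stack([np.array([t, 2.0 * t]) for t in range(5)])
print(temporal_smoothness(z_line))  # → sqrt(5) ≈ 2.2360679...
```

In practice one would compare this score across candidate VAE checkpoints on held-out encodings and select the model with the smoothest latent trajectory.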
Track: Proceedings Track
Submission Number: 70