Unsupervised Learning of Slow Features for Data Efficient Regression

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Withdrawn Submission
Keywords: Representation Learning, Semi-supervised Learning, Data Efficiency, Slowness Principle
Abstract: Research in computational neuroscience suggests that the human brain's unparalleled data efficiency results from highly efficient mechanisms for extracting and organizing slowly changing high-level features from continuous sensory input. In this paper, we apply this \textit{slowness principle} to a state-of-the-art representation learning method with the goal of data-efficient learning of downstream regression tasks. To this end, we propose the \textit{slow variational autoencoder} (S-VAE), an extension of the $\beta$-VAE that applies a temporal similarity constraint to the latent representations. We empirically compare our method to the $\beta$-VAE and the Temporal Difference VAE (TD-VAE), a state-of-the-art method for next-frame prediction in latent space with temporal abstraction. We evaluate the data efficiency of the three methods on downstream tasks using a synthetic 2D ball-tracking dataset and a dataset generated with the DeepMind Lab environment. On both tasks, the proposed method outperforms the baselines with both dense and sparse labels. Furthermore, the S-VAE matches the baselines' performance with only 1/5 to 1/11 of the data.
One-sentence Summary: Employing the slowness principle from neuroscience allows constructing features that facilitate data-efficient learning of downstream tasks.
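
The abstract describes the S-VAE as a $\beta$-VAE trained with an additional temporal similarity constraint on the latent representations of consecutive frames. The sketch below illustrates one way such a constraint could be combined with a standard $\beta$-VAE objective; it is not the paper's exact formulation. The `encoder`/`decoder` callables, the L2 penalty on consecutive latent means, and the weight `gamma` are illustrative assumptions.

```python
# Minimal sketch: beta-VAE loss plus a "slowness" penalty on consecutive frames.
# encoder(x) -> (mu, logvar), decoder(z) -> reconstruction are assumed interfaces;
# gamma and the L2 form of the temporal term are assumptions, not the paper's spec.
import torch
import torch.nn.functional as F

def s_vae_loss(x_t, x_t1, encoder, decoder, beta=4.0, gamma=1.0):
    """Loss for a pair of temporally adjacent frames (x_t, x_t1)."""
    mu_t, logvar_t = encoder(x_t)
    mu_t1, logvar_t1 = encoder(x_t1)

    # Reparameterization trick
    z_t = mu_t + torch.exp(0.5 * logvar_t) * torch.randn_like(mu_t)
    z_t1 = mu_t1 + torch.exp(0.5 * logvar_t1) * torch.randn_like(mu_t1)

    # Standard beta-VAE terms: reconstruction + beta-weighted KL to N(0, I)
    recon = F.mse_loss(decoder(z_t), x_t) + F.mse_loss(decoder(z_t1), x_t1)
    kl = -0.5 * torch.sum(1 + logvar_t - mu_t.pow(2) - logvar_t.exp())
    kl = kl - 0.5 * torch.sum(1 + logvar_t1 - mu_t1.pow(2) - logvar_t1.exp())

    # Temporal similarity ("slowness") penalty: keep latents of adjacent frames close
    slowness = F.mse_loss(mu_t, mu_t1)

    return recon + beta * kl + gamma * slowness
```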
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Reviewed Version (pdf): https://openreview.net/references/pdf?id=-Wpo-H0rcP