Abstract: We present Spectral Inference Networks, a framework for learning eigenfunctions of linear operators by stochastic optimization. Spectral Inference Networks generalize Slow Feature Analysis to generic symmetric operators, and are closely related to Variational Monte Carlo methods from computational physics. As such, they can be a powerful tool for unsupervised representation learning from video or graph-structured data. We cast training Spectral Inference Networks as a bilevel optimization problem, which allows for online learning of multiple eigenfunctions. We show results of training Spectral Inference Networks on problems in quantum mechanics and on feature learning from synthetic video data. Our results demonstrate that Spectral Inference Networks accurately recover eigenfunctions of linear operators and can discover interpretable representations from video in a fully unsupervised manner.
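To illustrate the core idea the abstract describes (recovering the spectrum of a symmetric operator by iterative optimization of a trace objective rather than by explicit eigendecomposition), here is a minimal sketch. It is not the paper's SpIN algorithm: the "operator" is a small symmetric matrix `A`, the "eigenfunctions" are columns of a parameter matrix `U` rather than a neural network, and the update is deterministic, whereas the paper replaces exact expectations with minibatch estimates. All names and step sizes below are illustrative assumptions.

```python
# Sketch: recover the top-k eigenvalues/eigenvectors of a symmetric
# operator by iteratively maximizing trace(Q^T A Q) over orthonormal Q,
# instead of calling an eigensolver directly.
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3                      # operator size, number of eigenfunctions
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                 # symmetric operator

U = rng.standard_normal((n, k))   # randomly initialized "features"
lr = 1e-2                         # illustrative step size

for step in range(5000):
    Q, _ = np.linalg.qr(U)        # orthonormalize the current features
    grad = A @ Q                  # gradient of trace(Q^T A Q) w.r.t. Q
    U = Q + lr * grad             # ascent step; re-orthonormalized next iter

Q, _ = np.linalg.qr(U)
ritz = np.sort(np.linalg.eigvalsh(Q.T @ A @ Q))[::-1]
true = np.sort(np.linalg.eigvalsh(A))[::-1][:k]
print("estimated top eigenvalues:", ritz)
print("true top eigenvalues:     ", true)
```

In the paper's setting, `U` is replaced by the outputs of a deep network evaluated on data samples, the expectations defining the objective are estimated from minibatches, and the bilevel formulation handles the bias this introduces when learning multiple eigenfunctions online.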
Keywords: spectral learning, unsupervised learning, manifold learning, dimensionality reduction
TL;DR: We show how to learn spectral decompositions of linear operators with deep learning, and use it for unsupervised learning without a generative model.
Code: [deepmind/spectral_inference_networks](https://github.com/deepmind/spectral_inference_networks) + [1 community implementation on Papers with Code](https://paperswithcode.com/paper/?openreview=SJzqpj09YQ)
Data: [Arcade Learning Environment](https://paperswithcode.com/dataset/arcade-learning-environment)