Efficient Representation of Low-Dimensional Manifolds using Deep Networks

Ronen Basri, David W. Jacobs

Nov 03, 2016 (modified: Feb 26, 2017) · ICLR 2017 conference submission · Readers: everyone
  • Abstract: We consider the ability of deep neural networks to represent data that lies near a low-dimensional manifold in a high-dimensional space. We show that deep networks can efficiently extract the intrinsic, low-dimensional coordinates of such data. Specifically, we show that the first two layers of a deep network can exactly embed points lying on a monotonic chain, a special type of piecewise linear manifold, mapping them to a low-dimensional Euclidean space. Remarkably, the network can do this using an almost optimal number of parameters. We also show that this network projects nearby points onto the manifold and then embeds them with little error. Experiments demonstrate that training with stochastic gradient descent can indeed find efficient representations similar to the one presented in this paper.
  • TL;DR: We show constructively that deep networks can learn to represent manifold data efficiently.
  • Keywords: Theory, Deep learning
  • Conflicts: weizmann.ac.il, cs.umd.edu, ethz.ch, ens-cachan.fr
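To make the abstract's central claim concrete, here is a minimal sketch of the kind of construction it describes: a two-layer ReLU network that exactly maps points on a piecewise linear chain in R^3 to their intrinsic (arc-length) coordinate. The chain, its vertices, and the knot placement below are hypothetical illustrations, not the paper's actual construction; the idea is simply that when every segment advances monotonically along one ambient direction, one ReLU unit per segment suffices to encode the piecewise linear change of slope.

```python
import numpy as np

# Hypothetical example: a chain in R^3 whose segments all advance
# monotonically along the first coordinate (so x[0] parameterizes the chain).
V = np.array([[0., 0., 0.], [1., 2., 0.], [2., 3., 1.], [3., 3., 2.]])
seg = np.diff(V, axis=0)                # segment direction vectors
lens = np.linalg.norm(seg, axis=1)      # segment lengths

def sample_chain(t):
    """Point on the chain at parameter t in [0, 3] (t = first coordinate)."""
    i = min(int(t), len(seg) - 1)
    return V[i] + (t - i) * seg[i]

def arc_length(t):
    """Intrinsic coordinate: arc length from V[0] to the point at t."""
    i = min(int(t), len(seg) - 1)
    return lens[:i].sum() + (t - i) * lens[i]

# Two-layer ReLU network: layer 1 projects onto e1 and shifts by the
# knots 0, 1, 2 (one hidden unit per segment); layer 2 encodes the
# per-segment slopes (the segment lengths) as slope differences.
W1 = np.tile(np.array([1., 0., 0.]), (3, 1))   # each unit reads x[0]
b1 = -np.array([0., 1., 2.])                   # knots at the chain's vertices
w2 = np.array([lens[0], lens[1] - lens[0], lens[2] - lens[1]])

def embed(x):
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU hidden layer
    return w2 @ h                      # linear readout = arc length

# The embedding is exact on the chain, using ~one parameter per segment.
for t in np.linspace(0.0, 3.0, 13):
    assert abs(embed(sample_chain(t)) - arc_length(t)) < 1e-9
```

This only illustrates the flavor of the result for a 1-D chain monotone in a coordinate direction; the paper's construction handles general monotonic chains and analyzes the parameter count and the behavior on points near, but not on, the manifold.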