Nonparametric Neural Networks

George Philipp, Jaime G. Carbonell

Nov 05, 2016 (modified: Mar 02, 2017) ICLR 2017 conference submission
  • Abstract: Automatically determining the optimal size of a neural network for a given task without prior information currently requires an expensive global search and training many networks from scratch. In this paper, we address the problem of automatically finding a good network size during a single training cycle. We introduce {\it nonparametric neural networks}, a non-probabilistic framework for optimizing over all possible network sizes, and prove its soundness when network growth is limited via an $\ell_p$ penalty. We train networks under this framework by continuously adding new units while eliminating redundant units via an $\ell_2$ penalty. We employ a novel optimization algorithm, which we term "Adaptive Radial-Angular Gradient Descent" or {\it AdaRad}, and obtain promising results.
  • TL;DR: We automatically set the size of an MLP by adding and removing units during training as appropriate.
  • Keywords: Deep learning, Supervised Learning
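The unit-elimination idea from the abstract can be illustrated with a small sketch: a per-unit (group) $\ell_2$ penalty on a hidden layer's fan-in weights drives entire columns exactly to zero, after which those units can be pruned. This is a hedged, simplified illustration of the general mechanism, not the paper's AdaRad algorithm; the function names and the pruning tolerance `tol` are our own assumptions.

```python
import numpy as np

def group_l2_penalty(W_in, lam):
    """Unsquared l2 penalty summed per hidden unit (columns of W_in).

    Unlike the squared l2 penalty, the unsquared form has a non-vanishing
    gradient near zero, so it can push a unit's entire fan-in vector
    exactly to zero, making the unit redundant and removable.
    """
    return lam * np.linalg.norm(W_in, axis=0).sum()

def prune_dead_units(W_in, W_out, tol=1e-6):
    """Remove hidden units whose fan-in weight norm fell below tol.

    W_in:  (d_in, h)  fan-in weights of the hidden layer
    W_out: (h, d_out) fan-out weights of the same units
    Returns the shrunken weight matrices (a smaller network).
    """
    alive = np.linalg.norm(W_in, axis=0) > tol
    return W_in[:, alive], W_out[alive, :]

# Example: a 3-unit hidden layer where unit 1 has been driven to zero.
W_in = np.array([[1.0, 0.0, 2.0],
                 [3.0, 0.0, 4.0]])
W_out = np.array([[0.5], [0.7], [0.9]])
W_in2, W_out2 = prune_dead_units(W_in, W_out)
```

Growing the network would correspond to the reverse step: appending a freshly initialized column to `W_in` and row to `W_out` during training.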