A Functional Characterization of Randomly Initialized Gradient Descent in Deep ReLU Networks

Sep 25, 2019 Blind Submission readers: everyone
  • Keywords: Inductive Bias, Generalization, Interpretability, Functional Characterization, Loss Surface, Initialization
  • TL;DR: A functional approach reveals that a flat initialization, preserved by gradient descent, leads to generalization.
  • Abstract: Despite their popularity and successes, deep neural networks are poorly understood theoretically and are treated as 'black box' systems. Taking a functional view of these networks gives us a useful new lens with which to understand them. This allows us to theoretically or experimentally probe properties of these networks, including the effect of standard initializations, the value of depth, the underlying loss surface, and the origins of generalization. One key result is that generalization results from smoothness of the functional approximation, combined with a flat initial approximation. This smoothness increases with the number of units, explaining why massively overparameterized networks continue to generalize well. (A minimal numerical sketch of these claims follows below.)
  • Original Pdf: pdf
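The abstract's two quantitative claims, that standard initializations produce an approximately flat initial function and that smoothness grows with the number of units, can be probed with a toy experiment. The sketch below is our own construction under stated assumptions, not code from the paper: a one-hidden-layer scalar ReLU network with He-style weight scaling and N(0, 1) biases (so the ReLU kinks spread across the input range; many frameworks default to zero biases), and hypothetical helper names like init_relu_net.

```python
import numpy as np

def init_relu_net(width, rng):
    """One-hidden-layer scalar ReLU net f(x) = w2 . relu(w1 * x + b1).

    He-style scaling for the weights; biases drawn from N(0, 1) so the
    ReLU kinks are spread across the input range (an assumption made
    for illustration only).
    """
    w1 = rng.normal(0.0, np.sqrt(2.0), size=width)        # input dim = 1
    b1 = rng.normal(0.0, 1.0, size=width)
    w2 = rng.normal(0.0, np.sqrt(2.0 / width), size=width)
    return w1, b1, w2

def forward(params, xs):
    w1, b1, w2 = params
    h = np.maximum(0.0, np.outer(xs, w1) + b1)  # (n_points, width) activations
    return h @ w2                               # (n_points,) network output

rng = np.random.default_rng(0)
xs = np.linspace(-3.0, 3.0, 601)
dx = xs[1] - xs[0]

for width in (16, 256, 4096):
    outs = np.stack([forward(init_relu_net(width, rng), xs) for _ in range(20)])
    # Scale of the initial function: stays bounded as width grows.
    mean_abs = np.abs(outs).mean()
    # Crude smoothness proxy: the sharpest local bend of the piecewise-linear
    # function, i.e. the largest slope change between adjacent grid cells.
    bend = (np.abs(np.diff(outs, n=2, axis=1)).max(axis=1) / dx).mean()
    print(f"width={width:5d}  mean|f(x)| = {mean_abs:.3f}  max slope change = {bend:.3f}")
```

On typical runs the output scale stays roughly constant while the sharpest local bend shrinks as width grows, consistent with the abstract's picture of wider networks realizing smoother initial functions.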
0 Replies