Statistical Characterization of Deep Neural Networks and their Sensitivity

Antoine Labatie

Sep 27, 2018 (modified: Oct 10, 2018) · ICLR 2019 Conference · Withdrawn Submission
  • Abstract: Despite their ubiquity, fully understanding deep neural networks (DNNs) and the reasons for their empirical success remains an active area of research. We contribute to this effort by introducing a principled approach to statistically characterizing DNNs and their sensitivity. By distinguishing between randomness arising from input data and randomness arising from model parameters, we study how central and non-central moments of network activations and sensitivity evolve during propagation. This yields novel statistical insights into the hypothesis space of input-output mappings encoded by different architectures. Our approach applies to both fully-connected and convolutional networks and incorporates most ingredients of modern DNNs: rectified linear unit (ReLU) activations, batch normalization, and skip connections.
  • Keywords: Statistics, Sensitivity, Exploding Gradient, Convolutional Neural Networks, Residual Neural Networks, Batch Normalization
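To make the abstract's setup concrete, here is a minimal sketch (not the authors' code; width, depth, and He initialization are illustrative assumptions) of Monte Carlo estimation of how moments of activations and of input sensitivity evolve as they propagate through a random fully-connected ReLU network:

```python
import numpy as np

# Hypothetical sketch: track per-layer moments of activations (x) and of a
# propagated input perturbation (dx) in a random fully-connected ReLU network.
rng = np.random.default_rng(0)
width, depth, n_inputs = 256, 10, 1000

x = rng.standard_normal((n_inputs, width))   # randomness from input data
dx = rng.standard_normal((n_inputs, width))  # input perturbation, for sensitivity

for layer in range(1, depth + 1):
    # Randomness from model parameters: He-initialized weights, Var = 2/fan_in
    W = rng.standard_normal((width, width)) * np.sqrt(2.0 / width)
    z = x @ W.T                  # pre-activations
    mask = z > 0                 # ReLU derivative
    x = z * mask                 # ReLU activations
    dx = (dx @ W.T) * mask       # Jacobian-vector product: propagated sensitivity
    print(f"layer {layer:2d}  E[x]={x.mean():+.3f}  "
          f"E[x^2]={np.mean(x**2):.3f}  E[dx^2]={np.mean(dx**2):.3f}")
```

Under He initialization the non-central second moments of both activations and sensitivity remain roughly constant with depth; other initializations make them grow or shrink exponentially, which is the kind of propagation statistic (and exploding-gradient behavior) the abstract describes characterizing.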