The intriguing role of module criticality in the generalization of deep networks

Anonymous

Sep 25, 2019 · ICLR 2020 Conference Blind Submission
  • TL;DR: We study the phenomenon that some modules of DNNs are more \emph{critical} than others. Our analysis leads us to propose a complexity measure that is able to explain the superior generalization performance of some architectures over others.
  • Abstract: We study the phenomenon that some modules of deep neural networks (DNNs) are more \emph{critical} than others: rewinding their parameter values back to initialization, while keeping the other modules fixed at their trained values, results in a large drop in the network's performance. Our analysis reveals interesting properties of the loss landscape, which lead us to propose a complexity measure, called {\em module criticality}, based on the shape of the valleys that connect the initial and final values of the module parameters. We formulate how generalization relates to module criticality, and show that this measure is able to explain the superior generalization performance of some architectures over others, whereas earlier measures fail to do so. (A minimal sketch of the rewinding experiment appears below this list.)
  • Keywords: Module Criticality Phenomenon, Complexity Measure, Deep Learning
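
The following is a minimal PyTorch sketch of the rewinding/interpolation experiment described in the abstract. It is an illustration rather than the authors' code: the function `module_interpolation_curve`, the `init_state` snapshot, and the `eval_fn` evaluation callback are hypothetical names introduced here. Setting `alpha = 0` rewinds the chosen module to its initialization while every other module stays at its trained values; sweeping `alpha` traces the valley between the module's initial and final parameters.

```python
# Illustrative sketch (hypothetical helper names), not the authors' implementation.
import copy
import torch

def module_interpolation_curve(model, init_state, module_name, eval_fn, num_points=11):
    """Evaluate the network along the straight line between one module's initial and
    trained parameters, keeping every other module at its trained values.
    alpha = 0 fully rewinds the module; alpha = 1 is the trained network."""
    trained_state = copy.deepcopy(model.state_dict())
    accuracies = []
    for step in range(num_points):
        alpha = step / (num_points - 1)
        mixed_state = copy.deepcopy(trained_state)
        for key in mixed_state:
            # Interpolate only the chosen module's floating-point parameters.
            if key.startswith(module_name + ".") and mixed_state[key].is_floating_point():
                mixed_state[key] = torch.lerp(init_state[key], trained_state[key], alpha)
        model.load_state_dict(mixed_state)
        accuracies.append(eval_fn(model))  # e.g. test accuracy at this alpha
    model.load_state_dict(trained_state)   # restore the fully trained network
    return accuracies
```

To use such a sketch, one would snapshot `init_state = copy.deepcopy(model.state_dict())` before training and pass a callback that computes test accuracy as `eval_fn`. A module whose accuracy collapses as `alpha` approaches 0 is critical in the sense described above, while a flat curve indicates the module can be rewound with little harm.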
