Deep Learning without Shortcuts: Shaping the Kernel with Tailored Rectifiers

Anonymous

Sep 29, 2021 (edited Nov 18, 2021), ICLR 2022 Conference Blind Submission
  • Keywords: Neural Network Training, Kernel Approximation for Neural Networks, Neural Network Initialization, Generalization
  • Abstract: Training very deep neural networks is still an extremely challenging task. The common solution to this is to add shortcut connections and normalization layers, which are both crucial ingredients in the ResNet architecture. However, there is strong evidence to suggest that ResNets behave more like ensembles of shallower networks than truly deep ones. Recently, it was shown that deep vanilla networks (i.e. networks without normalization layers or shortcut connections) can be trained as fast as ResNets by applying certain transformations to their activation functions. However, this method (called Deep Kernel Shaping) isn't fully compatible with ReLUs, and produces networks that exhibit significantly more overfitting than ResNets of similar size on ImageNet. In this work, we rectify this situation by developing a new type of transformation which is perfectly compatible with a variant of ReLUs -- Leaky ReLUs. We show in experiments that our method, which introduces negligible extra computational cost, achieves test accuracies with vanilla deep networks that are competitive with ResNets (of the same width/depth), and significantly higher than those obtained with the Edge of Chaos (EOC) method. And unlike with EOC, the test accuracies we obtain do not get worse with depth.
  • Supplementary Material: zip
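
The abstract does not spell out the transformation's exact form, but the ingredient it points to is a Leaky ReLU whose output is rescaled so that second moments are preserved under approximately Gaussian pre-activations. The sketch below illustrates that idea only; the negative slope alpha and the gain sqrt(2 / (1 + alpha^2)) are illustrative assumptions, not the paper's actual parameterization.

    import numpy as np

    def scaled_leaky_relu(x, alpha=0.9):
        # Leaky ReLU with negative slope `alpha`, rescaled so that if
        # x ~ N(0, 1) then E[output^2] = 1. For a standard Gaussian input,
        # E[LeakyReLU_alpha(x)^2] = (1 + alpha^2) / 2, so multiplying by
        # sqrt(2 / (1 + alpha^2)) restores a unit second moment.
        gain = np.sqrt(2.0 / (1.0 + alpha ** 2))
        return gain * np.where(x >= 0.0, x, alpha * x)

    # Sanity check: the second moment of the output stays close to 1 for
    # standard-Gaussian inputs, which is roughly how pre-activations are
    # distributed in a wide, randomly initialized layer.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(1_000_000)
    print(np.mean(scaled_leaky_relu(x) ** 2))  # ~1.0

Keeping this moment fixed is what stops activation magnitudes from blowing up or collapsing as depth grows, while a negative slope close to 1 keeps the unit nearly linear at initialization. The paper selects the negative slope via its kernel-shaping analysis rather than fixing it by hand, so the constant 0.9 above is purely for illustration.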