Superseding Model Scaling by Penalizing Dead Units and Points with Separation Constraints

25 Sept 2019 (modified: 05 May 2023), ICLR 2020 Conference Blind Submission
Keywords: Dead Point, Dead Unit, Model Scaling, Separation Constraints, Dying ReLU, Constant Width, Deep Neural Networks, Backpropagation
TL;DR: We propose using a set of constraints to penalize dead neurons and points in order to train very deep networks of constant width.
Abstract: In this article, we study a method that makes it possible to train extremely thin (4 or 8 neurons per layer) and relatively deep (more than 100 layers) feedforward networks without resorting to any architectural modification such as Residual or Dense connections, data normalization, or model scaling. We accomplish this by alleviating two problems. The first is neurons whose output is zero for the entire dataset, which renders them useless; this problem is known to the academic community as \emph{dead neurons}. The second is a less studied problem, dead points: data points that are mapped to zero during the forward pass of the network. The gradient generated by such points is not propagated back past the layer where they die, so they have no effect on the training process. In this work, we characterize both problems and propose a constraint formulation that, added to the standard loss function, solves them both. As an additional benefit, the proposed method allows the network weights to be initialized with constant or even zero values while still converging to reasonable results. We show very promising results on a toy dataset, MNIST, and CIFAR-10.
Code: https://www.dropbox.com/s/kl96825sae12zkc/sep_cons.zip?dl=0
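The abstract does not spell out the separation constraints themselves; the PyTorch-style sketch below is only one plausible reading, in which hinge penalties push each unit's maximum activation over a batch (against dead units) and each data point's maximum activation within a layer (against dead points) above a small margin. The class and function names, the margin, and the penalty weight are illustrative assumptions rather than the authors' formulation, which is in the linked code archive.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ThinDeepNet(nn.Module):
    """A thin, constant-width, deep feedforward ReLU network (illustrative)."""
    def __init__(self, in_dim=784, width=8, depth=100, out_dim=10):
        super().__init__()
        dims = [in_dim] + [width] * depth
        self.hidden = nn.ModuleList(
            [nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:])]
        )
        self.head = nn.Linear(width, out_dim)

    def forward(self, x):
        acts = []                      # post-ReLU activations, one tensor per layer
        for layer in self.hidden:
            x = torch.relu(layer(x))
            acts.append(x)
        return self.head(x), acts


def separation_penalty(acts, margin=1e-3):
    """Assumed hinge penalty on dead units and dead points.

    Dead-unit term: a unit whose maximum activation over the batch falls
    below `margin` is (near-)dead, so the hinge pushes it above the margin.
    Dead-point term: a sample whose maximum activation within a layer falls
    below `margin` is mapped to (near-)zero and stops propagating gradient,
    so it is penalized the same way.
    """
    total = 0.0
    for a in acts:                              # a has shape (batch, width)
        unit_max = a.max(dim=0).values          # per-unit max over the batch
        point_max = a.max(dim=1).values         # per-sample max over the units
        total = total + F.relu(margin - unit_max).mean()   # dead-unit term
        total = total + F.relu(margin - point_max).mean()  # dead-point term
    return total


# Usage: add the (weighted) penalty to the standard task loss.
model = ThinDeepNet()
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
logits, acts = model(x)
loss = F.cross_entropy(logits, y) + 0.1 * separation_penalty(acts)
loss.backward()

Note that the hinge form makes the penalty vanish once every unit and every point clears the margin, so in this reading the constraints only act on units and points that are dead or close to dying.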