Learning Compact Networks via Adaptive Network Regularization

Published: 07 Nov 2018, Last Modified: 05 May 2023. NIPS 2018 Workshop CDNNRIA Blind Submission.
Abstract: Deep neural networks typically have a fixed architecture, where the number of units per layer is treated as a hyperparameter to be tuned. Recently, strategies for training adaptive neural networks without a fixed architecture have seen renewed interest. In this paper, we employ a simple regularizer on the number of hidden units in the network, which we refer to as adaptive network regularization (ANR). This method places a penalty on the number of hidden units per layer, designed to encourage compactness while retaining flexibility in the network architecture. The penalty acts as the sole tuning parameter governing network size, which simplifies training. We describe a training strategy that grows the number of units during training, and show on several benchmark datasets that our model yields architectures smaller than those obtained by tuning the number of hidden units of a standard fixed architecture. Along with smaller architectures, we show on multiple datasets that our algorithm performs comparably to or better than fixed architectures learned by grid-searching over the hyperparameters. We motivate this model using small-variance asymptotics: a Bayesian neural network with a Poisson prior on the number of units per layer reduces to our model in the small-variance limit.
Keywords: adaptive network regularization, Bayesian neural networks, regularization
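
The abstract does not spell out the exact penalty form or growth rule, so the following is only a rough sketch of the idea it describes: a penalized objective of the form loss + lam * (number of hidden units), optimized by growing a one-hidden-layer MLP one unit at a time and keeping a grown candidate only when it lowers the penalized loss. The helper names (make_mlp, grow_by_one, train_anr_style), the penalty weight lam, the growth schedule, and the brief candidate fine-tuning are all illustrative assumptions, not the authors' method.

# Minimal sketch of an ANR-style adaptive-width training loop (illustrative
# assumptions only; the paper's actual algorithm is not given in the abstract).
import torch
import torch.nn as nn


def make_mlp(in_dim, hidden, out_dim):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))


def grow_by_one(model, in_dim, out_dim):
    """Return a copy of `model` with one extra hidden unit, reusing old weights."""
    old_h = model[0].out_features
    new = make_mlp(in_dim, old_h + 1, out_dim)
    with torch.no_grad():
        new[0].weight[:old_h] = model[0].weight      # copy old input weights
        new[0].bias[:old_h] = model[0].bias
        new[2].weight[:, :old_h] = model[2].weight   # copy old output weights
        new[2].bias.copy_(model[2].bias)
    return new


def train_anr_style(x, y, lam=0.05, steps=2000, grow_every=200, lr=1e-2):
    in_dim, out_dim = x.shape[1], y.shape[1]
    model = make_mlp(in_dim, 1, out_dim)             # start with a single unit
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for step in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        if (step + 1) % grow_every == 0:
            # Propose one extra unit, fit it briefly, and accept the grown model
            # only if it lowers loss + lam * (number of hidden units).
            candidate = grow_by_one(model, in_dim, out_dim)
            cand_opt = torch.optim.Adam(candidate.parameters(), lr=lr)
            for _ in range(50):
                cand_opt.zero_grad()
                loss_fn(candidate(x), y).backward()
                cand_opt.step()
            with torch.no_grad():
                current = loss_fn(model(x), y) + lam * model[0].out_features
                grown = loss_fn(candidate(x), y) + lam * candidate[0].out_features
            if grown < current:                      # extra unit pays for its penalty
                model, opt = candidate, cand_opt
    return model


if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(256, 4)
    y = torch.sin(x.sum(dim=1, keepdim=True))
    model = train_anr_style(x, y)
    print("hidden units:", model[0].out_features)

In this sketch, lam plays the role the abstract assigns to the penalty: it is the single knob governing network size, trading final fit against the number of hidden units.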