Intriguing Properties of Randomly Weighted Networks: Generalizing while Learning Next to Nothing

25 Jan 2018 · ICLR 2018 Workshop Submission
Abstract: Training deep neural networks results in strong learned representations that show good generalization capabilities. In most cases, training involves iterative modification of all weights inside the network via back-propagation. In this paper, we propose to take an extreme approach and fix \emph{almost all weights} of a deep convolutional neural network at their randomly initialized values, allowing only a small portion to be learned. As our experiments show, this often results in performance on par with that of learning all weights. We discuss the implications of this intriguing property of deep neural networks and suggest ways to harness it to create more robust representations.
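A minimal PyTorch-style sketch of the general idea described in the abstract: freeze a randomly initialized convnet and leave only a small subset of parameters trainable. The choice of architecture (resnet18) and of which parameters stay trainable (batch-norm affine parameters and the final classifier) is an assumption for illustration, not necessarily the split used in the paper.

import torch
import torch.nn as nn
import torchvision.models as models

# Randomly initialized convnet (no pretrained weights); architecture is an
# illustrative choice, not taken from the paper.
model = models.resnet18(weights=None)

# Fix almost all weights at their random initialization.
for param in model.parameters():
    param.requires_grad = False

# Allow only a small portion to be learned. Which portion is a hypothetical
# choice here: batch-norm affine parameters and the final fully connected layer.
trainable = []
for name, param in model.named_parameters():
    if "bn" in name or name.startswith("fc."):
        param.requires_grad = True
        trainable.append(param)

# The optimizer only sees the small trainable subset.
optimizer = torch.optim.SGD(trainable, lr=0.1, momentum=0.9)

n_total = sum(p.numel() for p in model.parameters())
n_train = sum(p.numel() for p in trainable)
print(f"training {n_train}/{n_total} parameters ({100 * n_train / n_total:.2f}%)")

Training then proceeds as usual with this optimizer; only the small trainable fraction is updated while the rest of the network keeps its random weights.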
TL;DR: Convnets can achieve good performance even when only a fraction of parameters are learned.
Keywords: Random Networks, Extreme Learning, Compact Representations