Big Neural Networks Waste Capacity

19 Apr 2024 (modified: 17 Jan 2013) · ICLR 2013 conference submission · Readers: Everyone
Decision: Oral (ICLR 2013 workshop)
Abstract: This article exposes the failure of some big neural networks to leverage added capacity to reduce underfitting. Past research suggests diminishing returns when increasing the size of neural networks. Our experiments on ImageNet LSVRC-2010 show that this may be due to the fact that bigger networks underfit the training objective, sometimes performing worse on the training set than smaller networks. This suggests that the optimization method, first-order gradient descent, fails in this regime. Directly attacking this problem, either through the optimization method or the choice of parametrization, may make it possible to improve the generalization error on large datasets, for which a large capacity is required.
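The central claim, that a bigger network can end up with a *higher training loss* than a smaller one under plain first-order gradient descent, can be probed with a toy experiment. The sketch below is illustrative only and is not the paper's ImageNet LSVRC-2010 setup: it uses PyTorch, synthetic data, and arbitrary widths, epochs, and learning rate chosen for demonstration.

    # Minimal sketch (assumptions: PyTorch, synthetic data, illustrative
    # hyperparameters). Trains a small and a large MLP with full-batch,
    # first-order SGD on the same data and reports final TRAINING loss,
    # to check whether the larger model actually fits the data better.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    X = torch.randn(4096, 100)            # synthetic inputs
    y = torch.randint(0, 10, (4096,))     # synthetic class labels

    def make_mlp(width):
        # Two-hidden-layer MLP; "width" controls capacity.
        return nn.Sequential(
            nn.Linear(100, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 10),
        )

    def final_train_loss(model, epochs=50, lr=0.01):
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(X), y)   # full-batch gradient descent
            loss.backward()
            opt.step()
        return loss_fn(model(X), y).item()

    for width in (64, 4096):              # "small" vs "big" network
        print(f"width={width}: train loss {final_train_loss(make_mlp(width)):.4f}")

Under the paper's hypothesis, one would expect the gap between the two training losses to be far smaller than the capacity gap suggests, or even reversed; whether that happens in this toy setting depends on the hyperparameters, which is consistent with the paper's point that the optimizer, not the capacity, is the bottleneck.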