Mildly Overparametrized Neural Nets can Memorize Training Data Efficiently

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission
TL;DR: We show that even mildly overparametrized networks (much smaller than those required by existing results) can be trained to memorize training data perfectly.
Abstract: It has been observed (Zhang et al., 2016) that deep neural networks can memorize: they achieve 100% accuracy on training data. Recent theoretical results have explained this behavior in highly overparametrized regimes, where the number of neurons in each layer is larger than the number of training samples. In this paper, we show that neural networks can be trained to memorize training data perfectly in a mildly overparametrized regime, where the number of parameters is just a constant factor more than the number of training samples and the number of neurons is much smaller.
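To make the memorization claim concrete, here is a minimal sketch (not the paper's construction) of the phenomenon the abstract describes: a small two-layer ReLU network, whose parameter count is only a constant factor above the number of samples, trained to 100% accuracy on random labels. All sizes (n=200, d=20, width=64), the optimizer, and the learning rate are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch only: a mildly overparametrized two-layer ReLU net
# memorizing random labels. Sizes and hyperparameters are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d, width = 200, 20, 64                 # samples, input dim, hidden width
X = torch.randn(n, d)                     # random inputs
y = torch.randint(0, 2, (n,)).float()     # random binary labels

model = nn.Sequential(nn.Linear(d, width), nn.ReLU(), nn.Linear(width, 1))
n_params = sum(p.numel() for p in model.parameters())
# Parameter count is a small constant factor above n (here ~7x),
# while the hidden width (64) is well below the sample count (200).
print(f"parameters: {n_params}, samples: {n}, ratio: {n_params / n:.1f}x")

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(2000):
    opt.zero_grad()
    logits = model(X).squeeze(1)
    loss = loss_fn(logits, y)
    loss.backward()
    opt.step()
    acc = ((logits > 0).float() == y).float().mean().item()
    if acc == 1.0:
        print(f"step {step}: memorized all {n} training points "
              f"(loss={loss.item():.4f})")
        break
```

Running this typically reaches 100% training accuracy within a few hundred steps, illustrating memorization without width exceeding the sample count.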
Code: https://www.dropbox.com/sh/sirnp8dmxwtivx8/AACfHgVzLXFSkvuTPsLNCkH-a?dl=0
Keywords: nonconvex optimization, optimization landscape, overparametrization