Early Stopping in Deep Networks: Double Descent and How to Eliminate it

Published: 12 Jan 2021, Last Modified: 22 Oct 2023
ICLR 2021 Poster
Readers: Everyone
Keywords: early stopping, double descent
Abstract: Over-parameterized models, such as large deep networks, often exhibit a double descent phenomenon, where, as a function of model size, the error first decreases, then increases, and finally decreases again. This intriguing double descent behavior also occurs as a function of training epochs and has been conjectured to arise because the number of training epochs controls the model complexity. In this paper, we show that such epoch-wise double descent occurs for a different reason: it is caused by a superposition of two or more bias-variance tradeoffs that arise because different parts of the network are learned at different epochs, and mitigating this by properly scaling the stepsizes can significantly improve the early stopping performance. We show this analytically for i) linear regression, where differently scaled features give rise to a superposition of bias-variance tradeoffs, and for ii) a wide two-layer neural network, where the first and second layer each govern a bias-variance tradeoff. Inspired by this theory, we study two standard convolutional networks empirically and show that eliminating epoch-wise double descent by adjusting the stepsizes of different layers improves the early stopping performance.
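The per-layer stepsize adjustment mentioned in the abstract can be realized with standard optimizer parameter groups. The sketch below is a minimal illustration assuming a PyTorch-style setup; the two-layer model, the choice of which layer is slowed down, and the factor of 10 are illustrative assumptions, not the configuration used in the paper (see the linked repository for the authors' code).

```python
# Minimal sketch of per-layer stepsize scaling via optimizer parameter groups.
# The model, the layer grouping, and the factor of 10 below are illustrative
# assumptions, not the paper's configuration.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(),  # first layer
    nn.Linear(512, 10),              # second (output) layer
)

base_lr = 0.01
optimizer = torch.optim.SGD(
    [
        {"params": model[0].parameters(), "lr": base_lr},         # first layer: base stepsize
        {"params": model[2].parameters(), "lr": base_lr / 10.0},  # second layer: scaled-down stepsize
    ],
    lr=base_lr,
    momentum=0.9,
)
# Training then proceeds as usual; the per-group learning rates control how
# quickly each part of the network is learned.
```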
One-sentence Summary: Epoch-wise double descent can be explained as a superposition of two or more bias-variance tradeoffs that arise because different parts of the network are learned at different epochs.
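As a toy illustration of the superposition described in the summary, the sketch below runs gradient descent on a linear regression problem whose features come in two groups with different scales, so the two groups are learned at very different speeds. All constants (dimensions, scales, noise level, stepsize) are arbitrary choices, not the paper's; whether the resulting test-risk curve visibly dips twice depends on them.

```python
# Toy sketch: gradient descent on linear regression with two feature groups of
# different scales. Each group is learned at its own speed and contributes its
# own bias-variance tradeoff over training time. All constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2 = 100, 90, 90          # training samples, features per group
scale1, scale2 = 1.0, 0.1        # group 2 is learned much more slowly

X = np.hstack([scale1 * rng.standard_normal((n, d1)),
               scale2 * rng.standard_normal((n, d2))])
w_star = rng.standard_normal(d1 + d2)
y = X @ w_star + 0.5 * rng.standard_normal(n)           # noisy training labels

X_te = np.hstack([scale1 * rng.standard_normal((2000, d1)),
                  scale2 * rng.standard_normal((2000, d2))])
y_te = X_te @ w_star                                     # noiseless test labels

w = np.zeros(d1 + d2)
lr, risks = 5e-3, []
for t in range(200_000):
    w -= lr * X.T @ (X @ w - y) / n                      # full-batch gradient step
    if t % 1000 == 0:
        risks.append(np.mean((X_te @ w - y_te) ** 2))    # test risk over training time

# Plotting `risks` against t shows how the fast-learned and slow-learned
# feature groups shape the test-risk curve at different stages of training.
```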
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Code: [MLI-lab/early_stopping_double_descent](https://github.com/MLI-lab/early_stopping_double_descent)
Community Implementations: [1 code implementation on CatalyzeX](https://www.catalyzex.com/paper/arxiv:2007.10099/code)
