Unraveling the Enigma of Double Descent: An In-depth Analysis through the Lens of Learned Feature Space

Published: 16 Jan 2024 · Last Modified: 16 Mar 2024 · ICLR 2024 poster
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: neural network, double descent, classification, interpretability
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Double descent is a counter-intuitive phenomenon in machine learning, and researchers have observed its manifestation across various models and tasks. While theoretical explanations have been proposed for this phenomenon in specific contexts, an accepted theory of the mechanism behind its occurrence in deep learning has yet to be established. In this study, we revisit the double descent phenomenon and demonstrate that the presence of noisy data strongly influences its occurrence. By comprehensively analysing the feature space of learned representations, we show that double descent arises in imperfect models trained on noisy data. We argue that while small and intermediate models before the interpolation threshold follow the traditional bias-variance trade-off, over-parameterized models interpolate noisy samples among robust data and thus acquire the capability to separate the information from the noise. The source code is available at \url{https://github.com/Yufei-Gu-451/double_descent_inference.git}.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Primary Area: learning theory
Submission Number: 287