Dodging the Sparse Double Descent

Published: 01 Jan 2023, Last Modified: 12 May 2023, CoRR 2023
Abstract: This paper presents an approach to addressing the issue of over-parametrization in deep neural networks, specifically by avoiding the ``sparse double descent'' phenomenon. The authors propose a learning framework that avoids this phenomenon and improves generalization, introduce an entropy measure that provides further insight into its emergence, and present a comprehensive quantitative analysis of factors such as re-initialization methods, model width and depth, and dataset noise. The proposed approach is supported by experimental results obtained with typical adversarial learning setups. The source code to reproduce the experiments is provided in the supplementary materials and will be publicly released upon acceptance of the paper.
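For context, a minimal sketch (not the authors' released code) of the kind of experiment behind the sparse double descent curve: sweep copies of a model across increasing magnitude-pruning sparsity levels, retrain, and record test accuracy. Here `train_fn` and `eval_fn` are assumed caller-supplied helpers standing in for ordinary training and evaluation loops.

```python
# Sketch: accuracy-vs-sparsity sweep with global magnitude pruning (PyTorch).
# On label-noisy data, the resulting curve is where the sparse double descent
# (drop, recovery, then final collapse) typically appears.

import copy
import torch.nn as nn
import torch.nn.utils.prune as prune


def magnitude_prune(model: nn.Module, amount: float) -> nn.Module:
    """Apply global L1 magnitude pruning to all Linear/Conv2d weights."""
    params = [
        (m, "weight")
        for m in model.modules()
        if isinstance(m, (nn.Linear, nn.Conv2d))
    ]
    prune.global_unstructured(
        params, pruning_method=prune.L1Unstructured, amount=amount
    )
    return model


def sparsity_sweep(model, train_fn, eval_fn,
                   levels=(0.0, 0.5, 0.8, 0.9, 0.95, 0.99)):
    """Retrain and evaluate a fresh pruned copy of `model` per sparsity level.

    `train_fn(model)` fine-tunes the surviving weights; `eval_fn(model)`
    returns test accuracy (both assumed helpers, not part of the paper).
    """
    curve = []
    for amount in levels:
        pruned = magnitude_prune(copy.deepcopy(model), amount)
        train_fn(pruned)
        curve.append((amount, eval_fn(pruned)))
    # A non-monotonic accuracy curve here is the double-descent signature.
    return curve
```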