Revisiting "Qualitatively Characterizing Neural Network Optimization Problems"

Published: 07 Nov 2020, Last Modified: 16 Mar 2025. NeurIPSW 2020: DL-IG Poster
Keywords: loss landscape, linear interpolation
TL;DR: In 2015, Goodfellow et al. found that loss was "simple" and "convex" along the linear path between initialization and the optimum found when training a neural network; this is not the case in modern settings.
Abstract: We revisit and extend the experiments of Goodfellow et al. (2015), who showed that, for then state-of-the-art networks, "the objective function has a simple, approximately convex shape" along the linear path between initialization and the trained weights. We do not find this to be the case for modern networks on CIFAR-10 and ImageNet. Instead, although loss is roughly monotonically non-increasing along this path, it remains high until close to the optimum. In addition, training quickly becomes linearly separated from the optimum by loss barriers. We conclude that, although Goodfellow et al.'s findings describe the "relatively easy to optimize" MNIST setting, behavior is qualitatively different in modern settings.
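The linear-interpolation probe described in the TL;DR and abstract can be sketched in a few lines. The sketch below is an illustration under stated assumptions, not the authors' code: `model`, `loss_fn`, and `data_loader` are hypothetical placeholders, and the initial and trained checkpoints are assumed to be available as PyTorch state dicts.

```python
# Minimal sketch of the linear-interpolation experiment of Goodfellow et al. (2015):
# evaluate the loss at theta(alpha) = (1 - alpha) * theta_init + alpha * theta_final
# for alpha in [0, 1]. Hypothetical helper, not the authors' implementation.
import copy
import torch

def interpolation_curve(model, theta_init, theta_final, loss_fn, data_loader, steps=25):
    """Return the average loss at evenly spaced points on the init-to-optimum line.

    theta_init / theta_final: state dicts of the initial and trained checkpoints.
    Note: buffers (e.g. BatchNorm running statistics) are interpolated as well.
    """
    model = copy.deepcopy(model)
    losses = []
    for alpha in torch.linspace(0.0, 1.0, steps):
        # Build the interpolated parameters and load them into the model.
        state = {k: (1 - alpha) * theta_init[k] + alpha * theta_final[k]
                 for k in theta_init}
        model.load_state_dict(state)
        model.eval()
        total, n = 0.0, 0
        with torch.no_grad():
            for x, y in data_loader:
                total += loss_fn(model(x), y).item() * len(y)
                n += len(y)
        losses.append(total / n)
    return losses
```

Plotting the returned losses against alpha reproduces the kind of one-dimensional loss profile the paper compares against the "simple, approximately convex shape" reported for MNIST-era networks.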
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/revisiting-qualitatively-characterizing/code)