Revisiting "Qualitatively Characterizing Neural Network Optimization Problems"

Published: 07 Nov 2020, Last Modified: 05 May 2023
Venue: NeurIPSW 2020: DL-IG Poster
Keywords: loss landscape, linear interpolation
TL;DR: In 2015, Goodfellow et al. found that loss was "simple" and "approximately convex" along the linear path from a network's initialization to its trained weights; this is not the case in modern settings.
Abstract: We revisit and extend the experiments of Goodfellow et al. (2015), who showed that - for then state-of-the-art networks - "the objective function has a simple, approximately convex shape" along the linear path between initialization and the trained weights. We do not find this to be the case for modern networks on CIFAR-10 and ImageNet. Instead, although loss is roughly monotonically non-increasing along this path, it remains high until close to the optimum. In addition, training quickly becomes linearly separated from the optimum by loss barriers. We conclude that, although Goodfellow et al.'s findings describe the "relatively easy to optimize" MNIST setting, behavior is qualitatively different in modern settings.
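For readers unfamiliar with the setup, the experiment evaluates the loss at points on the straight line theta(alpha) = (1 - alpha) * theta_init + alpha * theta_trained for alpha in [0, 1]. The following is a minimal sketch of that probe, assuming a PyTorch model; the function name interpolation_losses and its arguments are illustrative, not code from the paper.

import copy
import torch

def interpolation_losses(model_init, model_final, loss_fn, data_loader,
                         device="cpu", num_points=25):
    """Evaluate the loss at evenly spaced points on the line
    theta(alpha) = (1 - alpha) * theta_init + alpha * theta_final."""
    init_state = {k: v.detach().clone() for k, v in model_init.state_dict().items()}
    final_state = {k: v.detach().clone() for k, v in model_final.state_dict().items()}

    probe = copy.deepcopy(model_final).to(device)
    probe.eval()

    alphas = torch.linspace(0.0, 1.0, num_points)
    losses = []
    for alpha in alphas:
        # Blend floating-point parameters/buffers linearly between the endpoints;
        # keep integer buffers (e.g. BatchNorm counters) from the trained model.
        blended = {}
        for k in final_state:
            if torch.is_floating_point(final_state[k]):
                blended[k] = (1 - alpha) * init_state[k] + alpha * final_state[k]
            else:
                blended[k] = final_state[k]
        probe.load_state_dict(blended)

        total, count = 0.0, 0
        with torch.no_grad():
            for x, y in data_loader:
                x, y = x.to(device), y.to(device)
                total += loss_fn(probe(x), y).item() * x.size(0)
                count += x.size(0)
        losses.append(total / count)
    return alphas.tolist(), losses

The paper's finding can then be read off the returned curve: in the "easy" MNIST-era setting the loss decreases smoothly along alpha, whereas for modern CIFAR-10 and ImageNet networks it stays high until alpha is close to 1.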