Revisiting "Qualitatively Characterizing Neural Network Optimization Problems"

Oct 19, 2020 (edited Dec 09, 2020) · NeurIPS 2020 Workshop DL-IG
  • Keywords: loss landscape, linear interpolation
  • TL;DR: In 2015, Goodfellow et al. found that loss was "simple" and "convex" along the linear path between initialization and the optimum found by a neural network; this is not the case in modern settings.
  • Abstract: We revisit and extend the experiments of Goodfellow et al. (2015), who showed that - for then state-of-the-art networks - "the objective function has a simple, approximately convex shape" along the linear path between initialization and the trained weights. We do not find this to be the case for modern networks on CIFAR-10 and ImageNet. Instead, although the loss is roughly monotonically non-increasing along this path, it remains high until close to the optimum. In addition, early in training the weights quickly become separated from the final optimum by loss barriers along the linear path between them. We conclude that, although Goodfellow et al.'s findings describe the "relatively easy to optimize" MNIST setting, behavior is qualitatively different in modern settings.
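The interpolation experiment described above can be sketched in a few lines. This is a minimal NumPy illustration using a toy logistic-regression "network" rather than the CIFAR-10/ImageNet models from the paper; all variable names (`w_init`, `w_final`, `alphas`) and the toy data are assumptions for illustration, not the authors' code.

```python
import numpy as np

# Toy stand-in for a neural network: logistic regression on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y = (X @ true_w > 0).astype(float)

def loss(w):
    """Mean cross-entropy loss of weights w on the toy dataset."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    eps = 1e-9  # avoid log(0)
    return -np.mean(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))

# "Initialization" and "trained" weights (plain gradient descent for the latter).
w_init = rng.normal(size=5)
w = w_init.copy()
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.5 * X.T @ (p - y) / len(y)
w_final = w

# Evaluate loss along the linear path theta(alpha) = (1-alpha)*w_init + alpha*w_final,
# as in Goodfellow et al. (2015): alpha=0 is initialization, alpha=1 the optimum.
alphas = np.linspace(0.0, 1.0, 21)
losses = [loss((1.0 - a) * w_init + a * w_final) for a in alphas]
```

For a real network, the same loop would interpolate every parameter tensor of two checkpoints; the paper's observation is about the *shape* of the resulting `losses` curve: roughly convex for 2015-era MNIST models, but flat-then-cliff for modern networks.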