Visualizing the Loss Landscape of Neural Nets

15 Feb 2018 (modified: 21 Apr 2024) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: Neural network training relies on our ability to find "good" minimizers of highly non-convex loss functions. It is well known that certain network architecture designs (e.g., skip connections) produce loss functions that are easier to train, and that well-chosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effect on the underlying loss landscape, are not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. First, we introduce a simple "filter normalization" method that helps us visualize loss function curvature and make meaningful side-by-side comparisons between loss functions. Then, using a variety of visualizations, we explore how network architecture affects the loss landscape, and how training parameters affect the shape of minimizers.
TL;DR: We explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods.
Keywords: visualization, loss surface, flatness, sharpness
Code: [tomgoldstein/loss-landscape](https://github.com/tomgoldstein/loss-landscape) · [10 community implementations](https://paperswithcode.com/paper/?openreview=HkmaTz-0W)
Community Implementations: [5 code implementations](https://www.catalyzex.com/paper/arxiv:1712.09913/code)
Data: [CIFAR-100](https://paperswithcode.com/dataset/cifar-100), [ImageNet](https://paperswithcode.com/dataset/imagenet)
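
The "filter normalization" idea in the abstract is concrete enough to sketch: draw a random Gaussian direction with the same shape as the trained weights, rescale each filter of the direction so its norm matches the corresponding filter of the network, then plot the loss along that direction. Below is a minimal sketch under those assumptions, written against a generic PyTorch `model`, `loss_fn`, and `data_loader`; the function names are illustrative and this is not the authors' reference implementation (see the linked tomgoldstein/loss-landscape repository for that).

```python
# Minimal sketch of filter-normalized 1D loss interpolation (illustrative names,
# not the authors' reference code). Assumes a trained PyTorch `model`, a
# `loss_fn` with mean reduction (e.g., cross-entropy), and a `data_loader`.
import torch


def filter_normalized_direction(model):
    """Random Gaussian direction, rescaled filter-wise to match the model's weight norms."""
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        if p.dim() <= 1:
            # Biases and BatchNorm parameters: a common choice is to zero these
            # out so the direction only perturbs weight filters.
            d.zero_()
        else:
            # Treat each slice along dim 0 (one filter / output unit) separately
            # and give the random slice the same norm as the trained slice.
            for d_f, p_f in zip(d, p):
                d_f.mul_(p_f.norm() / (d_f.norm() + 1e-10))
        direction.append(d)
    return direction


@torch.no_grad()
def loss_along_direction(model, loss_fn, data_loader, direction, alphas):
    """Evaluate the mean loss at theta + alpha * d for each alpha, then restore theta."""
    theta = [p.detach().clone() for p in model.parameters()]
    losses = []
    for alpha in alphas:
        for p, p0, d in zip(model.parameters(), theta, direction):
            p.copy_(p0 + alpha * d)
        total, count = 0.0, 0
        for x, y in data_loader:
            total += loss_fn(model(x), y).item() * x.size(0)
            count += x.size(0)
        losses.append(total / count)
    for p, p0 in zip(model.parameters(), theta):  # restore the trained weights
        p.copy_(p0)
    return losses
```

A 2D loss surface can be sketched the same way by drawing two independent filter-normalized directions and evaluating the loss over a grid of (alpha, beta) coefficients.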