Understanding Generalization through Visualizations

Published: 09 Dec 2020 (Last Modified: 22 Oct 2023) · ICBINB 2020 Spotlight
Keywords: generalization, loss landscape, optimization landscape, minimizers, minima
TL;DR: Through extensive visualizations and measurements of neural network loss landscapes, we provide convincing evidence that the "wide-margin prior" is one of the reasons why neural nets prefer minima that generalize well.
Abstract: The power of neural networks lies in their ability to generalize to unseen data, yet the underlying reasons for this phenomenon remain elusive. Numerous rigorous attempts have been made to explain generalization, but available bounds are still quite loose, and analysis does not always lead to true understanding. The goal of this work is to make generalization more intuitive. Using visualization methods, we discuss the mystery of generalization, the geometry of loss landscapes, and how the curse (or, rather, the blessing) of dimensionality causes optimizers to settle into minima that generalize well.
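The abstract refers to visualizing the geometry of loss landscapes. As a rough illustration (not the authors' exact tooling), one common approach is to plot the loss along a random direction in parameter space around a trained minimizer; the sketch below assumes PyTorch and uses a tiny synthetic model and dataset so it runs on its own.

```python
# Minimal sketch: a 1D slice of the loss landscape along a random direction
# in parameter space, centered at a (roughly) trained minimizer.
# The model, data, and normalization scheme here are illustrative stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data and model (stand-ins for a real trained network and dataset).
X = torch.randn(256, 10)
y = (X.sum(dim=1) > 0).long()
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

# Train briefly so the center of the slice is an approximate minimizer.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(200):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

# Draw a random direction with the same shapes as the parameters and
# rescale it layer-wise so the slice has a meaningful scale.
theta = [p.detach().clone() for p in model.parameters()]
direction = [torch.randn_like(p) for p in theta]
direction = [d * (p.norm() / (d.norm() + 1e-10)) for d, p in zip(direction, theta)]

# Evaluate the loss at theta + alpha * direction over a range of alphas.
alphas = torch.linspace(-1.0, 1.0, steps=41)
losses = []
for alpha in alphas:
    with torch.no_grad():
        for p, p0, d in zip(model.parameters(), theta, direction):
            p.copy_(p0 + alpha * d)
        losses.append(loss_fn(model(X), y).item())

# Restore the trained weights, then inspect the slice
# (plot alphas vs. losses with matplotlib to see flat vs. sharp minima).
with torch.no_grad():
    for p, p0 in zip(model.parameters(), theta):
        p.copy_(p0)
for a, l in zip(alphas.tolist(), losses):
    print(f"alpha={a:+.2f}  loss={l:.4f}")
```

A flat, wide basin around alpha = 0 in such a plot is the kind of qualitative evidence the TL;DR's "wide-margin prior" argument appeals to, whereas a sharp, narrow dip suggests a minimizer that may generalize worse.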
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:1906.03291/code)