Why neural networks find simple solutions: The many regularizers of geometric complexity

Published: 31 Oct 2022, Last Modified: 23 Dec 2022
NeurIPS 2022 Accept
Readers: Everyone
Keywords: Deep Learning, Deep Learning Theory, Theory, Neural Networks, Regularization, Implicit Regularization, Smoothness, Complexity, Double-Descent
Abstract: In many contexts, simpler models are preferable to more complex ones, and controlling model complexity is the goal of many methods in machine learning, such as regularization, hyperparameter tuning, and architecture design. In deep learning, it has been difficult to understand the underlying mechanisms of complexity control, since many traditional measures are not naturally suited to deep neural networks. Here we develop the notion of geometric complexity, a measure of the variability of the model function computed using a discrete Dirichlet energy. Using a combination of theoretical arguments and empirical results, we show that many common training heuristics, such as parameter norm regularization, spectral norm regularization, flatness regularization, implicit gradient regularization, noise regularization, and the choice of parameter initialization, all act to control geometric complexity, providing a unifying framework in which to characterize the behavior of deep learning models.
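As a rough illustration of the definition sketched in the abstract, the snippet below computes a discrete Dirichlet energy as the batch-averaged squared Frobenius norm of the model's input-output Jacobian. This is a minimal sketch, not the paper's reference implementation; the `mlp` architecture, shapes, and the `geometric_complexity` helper name are assumptions introduced here for illustration.

```python
import jax
import jax.numpy as jnp

def geometric_complexity(apply_fn, params, inputs):
    """Discrete Dirichlet energy over a batch: the mean squared
    Frobenius norm of the input-output Jacobian of the model."""
    def sq_jacobian_norm(x):
        # Jacobian of the model output with respect to a single input x.
        jac = jax.jacobian(lambda xi: apply_fn(params, xi))(x)
        return jnp.sum(jac ** 2)
    # Average over the batch (the discrete, empirical form of the energy).
    return jnp.mean(jax.vmap(sq_jacobian_norm)(inputs))

# Hypothetical two-layer ReLU network used only to exercise the helper.
def mlp(params, x):
    w1, b1, w2, b2 = params
    h = jax.nn.relu(w1 @ x + b1)
    return w2 @ h + b2

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = (
    jax.random.normal(k1, (16, 8)), jnp.zeros(16),
    jax.random.normal(k2, (4, 16)), jnp.zeros(4),
)
batch = jax.random.normal(k3, (32, 8))  # 32 inputs of dimension 8
print(geometric_complexity(mlp, params, batch))
```

In this reading, a lower value indicates a flatter, less variable model function over the sampled inputs, which is the quantity the paper argues is implicitly controlled by the listed training heuristics.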
Supplementary Material: pdf
