Implicit Gradient Regularization

28 Sept 2020, 15:50 (edited 11 Mar 2021) · ICLR 2021 Poster
  • Keywords: implicit regularization, deep learning, deep learning theory, theoretical issues in deep learning, theory, regularization
  • Abstract: Gradient descent can be surprisingly good at optimizing deep neural networks without overfitting and without explicit regularization. We find that the discrete steps of gradient descent implicitly regularize models by penalizing gradient descent trajectories that have large loss gradients. We call this Implicit Gradient Regularization (IGR) and we use backward error analysis to calculate the size of this regularization. We confirm empirically that implicit gradient regularization biases gradient descent toward flat minima, where test errors are small and solutions are robust to noisy parameter perturbations. Furthermore, we demonstrate that the implicit gradient regularization term can be used as an explicit regularizer, allowing us to control this gradient regularization directly. More broadly, our work indicates that backward error analysis is a useful theoretical approach to the perennial question of how learning rate, model size, and parameter regularization interact to determine the properties of overparameterized models optimized with gradient descent.
  • One-sentence Summary: We identify a hidden form of regularization in gradient descent, Implicit Gradient Regularization, which biases overparameterized models toward flat, low-test-error solutions and helps explain why deep learning works so well.
  • Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
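The abstract notes that the implicit regularization term can also be applied as an explicit regularizer, i.e. a penalty on the squared loss-gradient norm. Below is a minimal, self-contained sketch of that idea on a toy quadratic loss with analytic gradients; the penalty strength `mu` and the toy matrix `A` are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy loss L(theta) = 0.5 * theta^T A theta, with analytic gradients.
# Explicit gradient regularization (sketch): minimize
#   L_tilde(theta) = L(theta) + mu * ||grad L(theta)||^2
# where mu is a hypothetical regularization strength (the paper relates the
# implicit strength to the learning rate; here mu is a free hyperparameter).

A = np.diag([10.0, 1.0])  # ill-conditioned: one sharp and one flat direction

def grad_reg_loss(theta, mu):
    # d/dtheta [ 0.5 theta^T A theta + mu * ||A theta||^2 ]
    #   = A theta + 2 * mu * A^T A theta
    return A @ theta + 2.0 * mu * (A.T @ A) @ theta

def descend(theta, lr, mu, steps):
    # Plain gradient descent on the explicitly regularized loss.
    for _ in range(steps):
        theta = theta - lr * grad_reg_loss(theta, mu)
    return theta

theta0 = np.array([1.0, 1.0])
plain = descend(theta0, lr=0.01, mu=0.0, steps=100)
regularized = descend(theta0, lr=0.01, mu=0.05, steps=100)
# The penalty amplifies the effective gradient along high-curvature directions,
# so the sharp component shrinks faster, illustrating the bias toward flatness.
```

Under these toy settings, the sharp coordinate (eigenvalue 10) of the regularized run decays faster than in the plain run, mirroring the paper's claim that penalizing large loss gradients biases trajectories toward flat regions.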