On the Generalization of Models Trained with SGD: Information-Theoretic Bounds and Implications

Published: 28 Jan 2022, Last Modified: 13 Feb 2023, ICLR 2022 Poster
Keywords: deep learning, generalization, information theory, learning bound, regularization
Abstract: This paper follows up on recent work of Neu et al. (2021) and presents new information-theoretic upper bounds for the generalization error of machine learning models, such as neural networks, trained with SGD. We apply these bounds to analyzing the generalization behaviour of linear and two-layer ReLU networks. Experimental study of these bounds provides insights into the SGD training of neural networks. The bounds also point to a new and simple regularization scheme which we show performs comparably to the current state of the art.
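For context, this line of bounds descends from the mutual-information bound of Xu & Raginsky (2017), which Neu et al. (2021) adapt to the SGD trajectory. A sketch of that canonical template (the general result this work builds on, not the paper's refined SGD bounds) is:

```latex
% Canonical mutual-information generalization bound (Xu & Raginsky, 2017).
% If the loss \ell(w, z) is \sigma-sub-Gaussian for every w, then
\[
  \bigl| \mathbb{E}\!\left[ L_\mu(W) - L_S(W) \right] \bigr|
  \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(W; S)},
\]
% where S = (Z_1, \dots, Z_n) is the training sample of size n, W the
% learned weights, L_S the empirical risk, L_\mu the population risk,
% and I(W; S) the mutual information between weights and sample.
```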
One-sentence Summary: We derive new information-theoretic generalization bounds for SGD and propose a new regularization scheme.
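The abstract does not spell out the regularization scheme, so the following is only a hypothetical sketch of the kind of quantity such bounds track: a toy NumPy SGD loop that accumulates the per-step variance of per-example gradients, the gradient-noise term that appears in Neu et al.-style trajectory bounds. All names and the exact step-size weighting are illustrative assumptions, not the paper's method.

```python
# Hypothetical illustration (not the paper's code): monitor the gradient-noise
# variance along an SGD trajectory on a toy linear regression problem.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = <x, w*> + noise
n, d = 200, 20
w_star = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)

def per_example_grads(w, Xb, yb):
    """Per-example gradients of the squared loss (x @ w - y)^2."""
    residuals = Xb @ w - yb                # shape (b,)
    return 2.0 * residuals[:, None] * Xb   # shape (b, d)

w = np.zeros(d)
lr, batch_size, steps = 0.01, 16, 500
trajectory_term = 0.0  # accumulates per-step gradient-noise variance

for t in range(steps):
    idx = rng.choice(n, size=batch_size, replace=False)
    grads = per_example_grads(w, X[idx], y[idx])
    g_mean = grads.mean(axis=0)
    # Trace of the empirical covariance of per-example gradients;
    # the lr**2 weighting mimics the step-size factors in trajectory
    # bounds but is an assumption here.
    noise_var = ((grads - g_mean) ** 2).sum(axis=1).mean()
    trajectory_term += lr ** 2 * noise_var
    w -= lr * g_mean

train_loss = np.mean((X @ w - y) ** 2)
print(f"train loss {train_loss:.4f}, trajectory term {trajectory_term:.4f}")
```

Penalizing or monitoring this accumulated term is one plausible reading of a bound-motivated regularizer; the scheme actually proposed in the paper may differ.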
Supplementary Material: zip