Assessing Generalization of SGD via Disagreement

Published: 28 Jan 2022, Last Modified: 13 Feb 2023. ICLR 2022 Spotlight.
Keywords: Generalization, Deep Learning, Empirical Phenomenon, Accuracy Estimation, Stochastic Gradient Descent
Abstract: We empirically show that the test error of deep networks can be estimated by training the same architecture on the same training set with two different runs of Stochastic Gradient Descent (SGD), and then measuring the disagreement rate between the two networks on unlabeled test data. This builds on, and strengthens, the observation of Nakkiran & Bansal (2020), which requires the two runs to be trained on separate training sets. We further show theoretically that this peculiar phenomenon arises from the well-calibrated nature of ensembles of SGD-trained models. This finding not only provides a simple empirical measure that directly predicts the test error from unlabeled test data, but also establishes a new conceptual connection between generalization and calibration.
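As a concrete illustration of the measure described in the abstract, here is a minimal sketch, assuming PyTorch; the names `model_a`, `model_b`, `unlabeled_loader`, and `disagreement_rate` are illustrative, not from the paper's code.

```python
# Minimal sketch of the disagreement-rate estimate, assuming PyTorch.
# `model_a` and `model_b` share an architecture and training set but come
# from two independent SGD runs (e.g., different random seeds).
import torch

@torch.no_grad()
def disagreement_rate(model_a, model_b, unlabeled_loader, device="cpu"):
    """Fraction of unlabeled inputs on which the two runs predict
    different labels; the paper reports this tracks the test error."""
    model_a.eval()
    model_b.eval()
    disagree, total = 0, 0
    for batch in unlabeled_loader:
        # Loaders may yield bare tensors or (tensor, ...) tuples; labels,
        # if present, are ignored, since only the inputs are needed.
        x = batch[0] if isinstance(batch, (list, tuple)) else batch
        x = x.to(device)
        pred_a = model_a(x).argmax(dim=1)  # predicted class per input
        pred_b = model_b(x).argmax(dim=1)
        disagree += (pred_a != pred_b).sum().item()
        total += x.size(0)
    return disagree / total
```

The returned disagreement rate itself serves as the test-error estimate; no test labels are consulted at any point.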
One-sentence Summary: We provide a surprisingly simple technique to accurately estimate the test error of deep neural networks using unlabeled data and we prove that this works because SGD ensembles are naturally well-calibrated.
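For intuition on why calibration drives the result, the following is a sketch of the mechanism, not the paper's exact theorem statement or conditions. Write $\tilde h_k(x)$ for the probability that a random SGD run predicts class $k$ on input $x$, and $y(x)$ for the true label, and assume class-wise calibration of the ensemble (whenever $\tilde h_k(x)=q$, class $k$ is correct a $q$ fraction of the time).

```latex
% Expected error of a random run equals expected disagreement between two
% independent runs, under class-wise calibration of the ensemble:
%   error:         E_h 1[h(x) != y(x)]      = 1 - \tilde h_{y(x)}(x)
%   disagreement:  E_{h,h'} 1[h(x) != h'(x)] = \sum_k \tilde h_k(x)(1 - \tilde h_k(x))
\[
  \mathbb{E}_x\!\left[1 - \tilde h_{y(x)}(x)\right]
  = 1 - \sum_k \mathbb{E}_x\!\left[\tilde h_k(x)\,\mathbf{1}[y(x)=k]\right]
  \overset{\text{(calibration)}}{=} 1 - \sum_k \mathbb{E}_x\!\left[\tilde h_k(x)^2\right]
  = \mathbb{E}_x\!\left[\sum_k \tilde h_k(x)\bigl(1 - \tilde h_k(x)\bigr)\right].
\]
```

The calibration step uses $\mathbb{E}[\tilde h_k\,\mathbf{1}[y=k]] = \mathbb{E}[\tilde h_k \Pr(y=k \mid \tilde h_k)] = \mathbb{E}[\tilde h_k^2]$, and the final equality uses $\sum_k \tilde h_k(x) = 1$.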