A Note on "Assessing Generalization of SGD via Disagreement"

Published: 06 Nov 2022, Last Modified: 30 Jun 2023. Accepted by TMLR.
Authors that are also TMLR Expert Reviewers: ~Andreas_Kirsch1
Abstract: Several recent works find empirically that the average test error of deep neural networks can be estimated via the prediction disagreement of models, which does not require labels. In particular, Jiang et al. (2022) show that, for the disagreement between two separately trained networks, this `Generalization Disagreement Equality' follows from the well-calibrated nature of deep ensembles under a proposed notion of `class-aggregated calibration.' In this reproduction, we show that the suggested theory might be impractical because a deep ensemble's calibration can deteriorate as prediction disagreement increases, which is precisely when the coupling of test error and disagreement is of interest; moreover, labels are needed to estimate the calibration on new datasets. Further, we simplify the theoretical statements and proofs, showing them to be straightforward within a probabilistic context, unlike the original hypothesis space view employed by Jiang et al. (2022).
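For readers skimming the abstract, the following minimal sketch (not the authors' code from the linked repository; the array names and shapes are assumptions) illustrates the label-free estimator at issue: the disagreement rate between two separately trained classifiers, which under the Generalization Disagreement Equality tracks the average test error, provided the ensemble satisfies class-aggregated calibration.

```python
import numpy as np


def disagreement_rate(logits_a: np.ndarray, logits_b: np.ndarray) -> float:
    """Fraction of (unlabeled) test inputs on which two classifiers' argmax predictions differ.

    Under the Generalization Disagreement Equality, this label-free quantity
    tracks the models' average test error when the ensemble is class-aggregated
    calibrated.
    """
    preds_a = logits_a.argmax(axis=-1)
    preds_b = logits_b.argmax(axis=-1)
    return float(np.mean(preds_a != preds_b))


if __name__ == "__main__":
    # Hypothetical example: random logits for 1000 test inputs and 10 classes,
    # standing in for the outputs of two separately trained networks.
    rng = np.random.default_rng(0)
    logits_a = rng.normal(size=(1000, 10))
    logits_b = rng.normal(size=(1000, 10))
    estimate = disagreement_rate(logits_a, logits_b)
    print(f"Disagreement rate (test-error estimate under the GDE): {estimate:.3f}")
```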
Certifications: Expert Certification
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/BlackHC/2202.01851
Assigned Action Editor: ~Laurent_Dinh1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 340