On the Variance of Neural Network Training with respect to Test Sets and Distributions

Published: 16 Jan 2024, Last Modified: 21 Apr 2024. ICLR 2024 poster.
Keywords: stochasticity, distribution shift, randomness, deep learning, variance, random seed
TL;DR: Neural network trainings are nondeterministic; this paper presents and experimentally justifies a statistical model for the variation between independent runs of training.
Abstract: Neural network trainings are stochastic, causing the performance of trained networks to vary across repeated runs of training. We contribute the following results towards understanding this variation. (1) We demonstrate that, despite exhibiting significant variance on their test sets, standard CIFAR-10 and ImageNet trainings have little variance in their performance on the test distributions from which those test sets are sampled. (2) We introduce the independent errors assumption and show that it suffices to recover the structure and variance of the empirical accuracy distribution across repeated runs of training. (3) We prove that test-set variance is unavoidable given the observation that ensembles of identically trained networks are calibrated (Jiang et al., 2021), and demonstrate that the variance of binary classification trainings closely follows a simple formula based on the error rate and the number of test examples. (4) We conduct preliminary studies of data augmentation, learning rate, finetuning instability, and distribution shift through the lens of variance between runs.
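One plausible reading of result (3), under the independent errors assumption, is that the run-to-run standard deviation of test-set accuracy follows the binomial formula sqrt(p(1-p)/n), where p is the error rate and n is the number of test examples. The abstract does not state the formula explicitly, so the sketch below is an assumption, not a quotation of the paper's result:

```python
import math

def accuracy_std(error_rate: float, n_test: int) -> float:
    """Binomial standard deviation of test-set accuracy across runs,
    assuming each test example is misclassified independently with
    probability equal to the error rate (independent errors assumption;
    this specific formula is our reading, not quoted from the paper)."""
    return math.sqrt(error_rate * (1.0 - error_rate) / n_test)

# Example: a binary classifier with 5% error on a 10,000-example test set
# would vary across runs by roughly 0.22 percentage points of accuracy.
print(round(accuracy_std(0.05, 10_000), 5))  # → 0.00218
```

This matches the abstract's qualitative claim that the variance depends only on the error rate and the test-set size, and shrinks as the test set grows.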
Primary Area: representation learning for computer vision, audio, language, and other modalities
Submission Number: 8145