Overparameterization Implicitly Regularizes Input-Space Smoothness

06 Oct 2022 (modified: 05 May 2023) · INTERPOLATE at NeurIPS 2022
Keywords: Lipschitz constant, smoothness, double descent, deep networks
Abstract: Existing bounds on the generalization error of deep networks assume some form of smooth or bounded dependence on the input variable and intermediate activations, but fall short of investigating the mechanisms that control these factors in practice. In this work, we present an empirical study of the Lipschitz constant of trained networks as the number of model parameters and training epochs vary. We observe non-monotonic trends in the Lipschitz constant that correlate strongly with double descent in the test error. Our findings highlight a theoretical shortcoming of modeling input-space smoothness via uniform bounds.
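The abstract mentions measuring the Lipschitz constant of trained networks. The paper does not specify its estimation procedure, but a common approach is to lower-bound the constant by sampling input pairs and to upper-bound it by the product of layer spectral norms. The sketch below illustrates both on a hypothetical toy ReLU network with random weights (the network, dimensions, and sampling scheme are all illustrative assumptions, not the authors' setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU network with random weights -- a hypothetical
# stand-in for the trained networks studied in the paper.
W1 = rng.standard_normal((64, 16))
W2 = rng.standard_normal((16, 1))

def f(x):
    # x: array of shape (n, 64); returns shape (n, 1)
    return np.maximum(x @ W1, 0.0) @ W2

def empirical_lipschitz_lower(f, dim, n_pairs=2000):
    """Lower-bound the Lipschitz constant by sampling nearby input
    pairs and taking the largest ratio ||f(x) - f(y)|| / ||x - y||."""
    x = rng.standard_normal((n_pairs, dim))
    y = x + 1e-3 * rng.standard_normal((n_pairs, dim))
    num = np.linalg.norm(f(x) - f(y), axis=1)
    den = np.linalg.norm(x - y, axis=1)
    return float(np.max(num / den))

L_lower = empirical_lipschitz_lower(f, dim=64)

# Upper bound: product of layer spectral norms (ReLU is 1-Lipschitz).
# This uniform bound is typically loose, which is the kind of gap the
# paper's empirical trends point to.
L_upper = float(np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2))

print(L_lower, L_upper)  # the sampled lower bound never exceeds L_upper
```

Tracking the sampled lower bound across model sizes and training epochs would reproduce the kind of non-monotonic curve the abstract describes, whereas the uniform upper bound stays fixed for a given set of weights.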