Keywords: Bayesian Deep Learning, Implicit Regularization, Variational Inference, Implicit Bias of SGD
TL;DR: We demonstrate theoretically and empirically that one can exploit the implicit bias of SGD for efficient Bayesian deep learning.
Abstract: Modern deep learning models generalize remarkably well in-distribution, despite being overparametrized and trained with little to no explicit regularization.
Instead, current theory credits implicit regularization imposed by the choice of architecture, hyperparameters and optimization procedure.
However, deploying deep learning models out-of-distribution, in sequential decision-making tasks, or in safety-critical domains, necessitates reliable uncertainty quantification, not just a point estimate.
The machinery of modern approximate inference --- Bayesian deep learning --- should answer the need for uncertainty quantification, but its effectiveness has been challenged by the associated computational burden and by difficulties in defining useful explicit inductive biases through priors.
Instead, in this work we demonstrate, both theoretically and empirically, how to regularize a variational deep network implicitly via the optimization procedure, just as for standard deep learning.
We fully characterize the inductive bias of (stochastic) gradient descent in the case of an overparametrized linear model as generalized variational inference and demonstrate the importance of the choice of parametrization.
Finally, we show empirically that our approach achieves strong in- and out-of-distribution performance without tuning of additional hyperparameters and with minimal time and memory overhead over standard deep learning.
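The implicit bias referenced above can be seen concretely in the overparametrized linear setting: gradient descent on a least-squares objective, initialized at zero, converges to the minimum-ℓ2-norm interpolating solution without any explicit regularizer. The following sketch is illustrative only (it is not the paper's variational method) and uses synthetic data:

```python
import numpy as np

# Illustrative sketch (not the paper's method): gradient descent on an
# overparametrized linear model, started at zero, converges to the
# minimum-norm interpolating solution -- a classic form of implicit bias.
rng = np.random.default_rng(0)
n, d = 20, 100                      # fewer observations than parameters
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)                     # zero initialization matters for this bias
lr = 1e-2
for _ in range(20000):
    w -= lr * X.T @ (X @ w - y) / n # plain gradient step on mean squared error

w_min_norm = np.linalg.pinv(X) @ y  # minimum-l2-norm interpolant
print(np.allclose(w, w_min_norm, atol=1e-4))
```

Because the iterates never leave the row space of `X`, gradient descent selects this particular interpolant among the infinitely many that fit the data exactly; the paper's contribution is to characterize the analogous bias for variational deep networks.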
Submission Number: 30