Variational Stochastic Gradient Descent for Deep Neural Networks

Published: 18 Jun 2024, Last Modified: 09 Jul 2024 · WANT@ICML 2024 Poster · CC BY 4.0
Keywords: Optimization in DNNs, Stochastic Variational Inference, Probabilistic Inference, Stochastic Gradient Descent
TL;DR: We propose Variational Stochastic Gradient Descent (VSGD), a novel optimization method that combines Stochastic Gradient Descent (SGD) with Bayesian inference by modeling SGD as a probabilistic graphical model.
Abstract: Optimizing deep neural networks (DNNs) is one of the main tasks in successful deep learning. Current state-of-the-art optimizers are adaptive gradient-based optimization methods such as Adam. Recently, there has been an increasing interest in formulating gradient-based optimizers in a probabilistic framework for better estimation of gradients and modeling uncertainties. Here, we propose to combine both approaches, resulting in the Variational Stochastic Gradient Descent (VSGD) optimizer. We model gradient updates as a probabilistic model and utilize stochastic variational inference (SVI) to derive an efficient and effective update rule. Further, we show how our VSGD method relates to other adaptive gradient-based optimizers like Adam. Lastly, we carry out experiments on two image classification datasets and three deep neural network architectures, where we show that VSGD converges faster and outperforms Adam and SGD.
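The paper's actual update rule is derived via stochastic variational inference on a specific graphical model; the code below is only a rough illustrative sketch of the underlying intuition, namely treating each minibatch gradient as a noisy observation of a latent "true" gradient and descending along the filtered posterior mean. The class name, hyperparameters (`obs_precision`, `decay`), and the conjugate Gaussian update are hypothetical simplifications, not the VSGD algorithm from the paper.

```python
import torch


class ToyVariationalSGD:
    """Illustrative sketch only -- not the paper's VSGD update rule.

    Each minibatch gradient g_t is treated as a noisy observation of a
    latent true gradient. A per-parameter Gaussian belief (mean, precision)
    is updated with a precision-weighted average, and the parameter is
    moved along the posterior mean. All hyperparameters are hypothetical.
    """

    def __init__(self, params, lr=1e-3, obs_precision=1.0, decay=0.9):
        self.params = [p for p in params if p.requires_grad]
        self.lr = lr
        self.obs_precision = obs_precision  # assumed precision of the gradient noise
        self.decay = decay                  # forgets stale evidence over time
        self.mean = [torch.zeros_like(p) for p in self.params]
        self.prec = [torch.ones_like(p) for p in self.params]

    @torch.no_grad()
    def step(self):
        for p, m, s in zip(self.params, self.mean, self.prec):
            if p.grad is None:
                continue
            g = p.grad
            # Decay the old precision so the belief can track a drifting gradient.
            s.mul_(self.decay)
            # Conjugate Gaussian update: the posterior mean is a
            # precision-weighted average of the prior mean and the observation.
            new_prec = s + self.obs_precision
            m.copy_((s * m + self.obs_precision * g) / new_prec)
            s.copy_(new_prec)
            # Descend along the posterior mean of the latent gradient.
            p.add_(m, alpha=-self.lr)

    def zero_grad(self):
        for p in self.params:
            if p.grad is not None:
                p.grad.detach_()
                p.grad.zero_()
```

A sketch like this is used in place of a standard optimizer (`opt = ToyVariationalSGD(model.parameters())`, then `opt.zero_grad()`, `loss.backward()`, `opt.step()` per minibatch); the precision-weighted averaging is what gives the filtered, Adam-like adaptivity the abstract alludes to, whereas VSGD itself obtains its update from stochastic variational inference.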
Submission Number: 26