Non-Uniform Sampling and Adaptive Optimizers in Deep Learning

Published: 26 Oct 2023, Last Modified: 13 Dec 2023 · NeurIPS 2023 Workshop Poster
Keywords: Importance Sampling, Stochastic Gradient Descent, Deep Learning, Optimizers
Abstract: Stochastic gradient descent samples the training set uniformly to build an unbiased gradient estimate from a limited number of samples. However, at a given step of the training process, some samples are more helpful than others for continuing to learn. Importance sampling for training deep neural networks has been widely studied, with the goal of designing sampling schemes that outperform uniform sampling. After recalling the theory of importance sampling for deep learning, this paper focuses on the interplay between the sampling scheme and the optimizer used. We show that sampling proportionally to the per-sample gradient norms, while optimal for stochastic gradient descent in its standard form, is not optimal for adaptive optimizers. This implies that new sampling schemes have to be designed with the optimizer in mind; in particular, using approximations of the per-sample gradient-norm scheme with adaptive optimizers is likely to yield unsatisfactory results.
Submission Number: 20
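
For context, the gradient-norm scheme recalled in the abstract can be illustrated with a minimal sketch (not the paper's implementation): sample indices with probability proportional to the per-sample gradient norms, then reweight each sampled gradient by 1/(N p_i) so the minibatch estimate stays unbiased. The least-squares setup, variable names, and hyperparameters below are illustrative assumptions, not taken from the paper; in practice the exact per-sample norms are replaced by cheaper approximations.

```python
# Minimal sketch (illustrative assumptions, not the paper's code): importance
# sampling for SGD on a least-squares problem, where per-sample gradients are
# available in closed form. Probabilities are proportional to per-sample
# gradient norms, and sampled gradients are reweighted by 1/(N * p_i) so the
# minibatch gradient estimate remains unbiased.
import numpy as np

rng = np.random.default_rng(0)
N, d, batch_size, lr = 1000, 20, 32, 0.01

# Synthetic regression data (illustrative only).
X = rng.standard_normal((N, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(N)
theta = np.zeros(d)

for step in range(200):
    # Per-sample gradient of the squared loss: g_i = (x_i^T theta - y_i) x_i.
    residuals = X @ theta - y                     # shape (N,)
    per_sample_grads = residuals[:, None] * X     # shape (N, d)

    # Sampling probabilities proportional to per-sample gradient norms
    # (in practice, cheap approximations of these norms are used instead).
    norms = np.linalg.norm(per_sample_grads, axis=1)
    p = norms / norms.sum() if norms.sum() > 0 else np.full(N, 1.0 / N)

    # Importance-sampled minibatch, reweighted to stay unbiased:
    # E_p[g_i / (N * p_i)] = (1/N) * sum_i g_i, the full-batch gradient.
    idx = rng.choice(N, size=batch_size, p=p)
    grad_estimate = (per_sample_grads[idx] / (N * p[idx])[:, None]).mean(axis=0)

    # Plain SGD update; the paper's point is that this sampling choice is tied
    # to the optimizer, so an adaptive update would call for a different scheme.
    theta -= lr * grad_estimate
```

The reweighting by 1/(N p_i) is what keeps the estimator unbiased for plain SGD; the abstract's claim is that for adaptive optimizers, which rescale the gradient per coordinate, choosing p_i proportional to gradient norms no longer minimizes the relevant variance, so the sampling scheme should be co-designed with the update rule.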