BANDIT SAMPLING FOR FASTER NEURAL NETWORK TRAINING WITH SGD

01 Mar 2023 (modified: 31 May 2023) · Submitted to Tiny Papers @ ICLR 2023
Keywords: importance sampling, neural network training
TL;DR: We propose a new non-uniform sampling technique for SGD, based on multi-armed bandits, for training neural networks.
Abstract: Importance sampling is a valuable technique in deep learning that involves sampling informative training examples more frequently to improve learning algorithms. However, obtaining reliable estimates of sample importance early in training can be challenging, as existing importance sampling methods can be computationally expensive and slow to converge. In this work, we propose a novel sampling scheme based on multi-armed bandits (MAB). The proposed sampler achieves higher validation accuracies significantly earlier in training, compared to baselines.
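The abstract does not specify which bandit algorithm is used, so the following is only an illustrative sketch of the general idea: treat each training example as an arm of an EXP3-style bandit, use the observed loss as the reward signal, and sample high-loss (informative) examples more often. The class name `Exp3Sampler` and all parameter choices are assumptions, not the paper's actual method.

```python
import numpy as np

class Exp3Sampler:
    """Illustrative EXP3-style bandit over training examples (not the
    paper's method). Each example is an arm; the observed loss acts as
    the reward, so informative examples are drawn more frequently."""

    def __init__(self, n_arms, gamma=0.1, seed=0):
        self.n_arms = n_arms
        self.gamma = gamma              # exploration rate in [0, 1]
        self.weights = np.ones(n_arms)  # one weight per training example
        self.rng = np.random.default_rng(seed)

    def probabilities(self):
        # mix the weight-proportional distribution with uniform exploration
        w = self.weights / self.weights.sum()
        return (1 - self.gamma) * w + self.gamma / self.n_arms

    def sample(self):
        # draw one example index; return it with its sampling probability
        p = self.probabilities()
        arm = int(self.rng.choice(self.n_arms, p=p))
        return arm, p[arm]

    def update(self, arm, reward, prob):
        # importance-weighted reward estimate keeps the update unbiased
        # under non-uniform sampling
        est = reward / prob
        self.weights[arm] *= np.exp(self.gamma * est / self.n_arms)
```

In an SGD loop, `reward` would be a normalized per-example loss: examples the network still gets wrong earn high rewards and are revisited sooner, which matches the goal of obtaining useful importance estimates early in training.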