On the Parallel Complexity of Multilevel Monte Carlo in Stochastic Gradient Descent

Published: 26 Oct 2023, Last Modified: 13 Dec 2023, NeurIPS 2023 Workshop Poster
Keywords: multilevel Monte Carlo method, neural differential equations, neural stochastic differential equations, stochastic gradient descent
TL;DR: An improved Multilevel Monte Carlo gradient estimator for SGD on massively parallel computers
Abstract: In stochastic gradient descent (SGD) for sequential simulations such as neural stochastic differential equations, the Multilevel Monte Carlo (MLMC) method is known to offer better theoretical computational complexity than the naive Monte Carlo approach. In practice, however, MLMC scales poorly on massively parallel computing platforms such as modern GPUs, because its parallel complexity is as large as that of the naive Monte Carlo method. To address this issue, we propose the delayed MLMC gradient estimator, which drastically reduces the parallel complexity of MLMC by recycling gradient components computed in earlier steps. The proposed estimator provably reduces the average parallel complexity per iteration at the cost of a slightly worse per-iteration convergence rate. In our numerical experiments, we use a deep hedging example to demonstrate the superior parallel complexity of our method compared to standard MLMC in SGD.
Submission Number: 31
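To make the abstract's idea concrete, below is a minimal sketch of SGD with a "delayed" MLMC gradient estimator: the level-l correction term of the telescoping MLMC sum is recomputed only every few iterations (less often for finer, more expensive levels) and otherwise recycled from the step at which it was last evaluated. The helper names (`level_correction`, `refresh_period`), the refresh schedule, the sample allocation, and the placeholder correction itself are illustrative assumptions, not the estimator or schedule defined in the paper.

```python
import numpy as np

# Hypothetical helper: stands in for simulating coupled coarse/fine SDE paths
# at resolution level `level` and averaging the gradient difference G_l - G_{l-1}.
# Not the paper's estimator; just a cheap stand-in so the sketch runs.
def level_correction(theta, level, n_samples, rng):
    return rng.standard_normal(theta.shape) / (2.0 ** level * np.sqrt(n_samples))

def delayed_mlmc_sgd(theta0, num_levels=5, num_iters=100, lr=1e-2, seed=0):
    """Sketch of SGD with a delayed MLMC gradient estimator: each level's
    correction is refreshed only periodically and recycled in between."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    cached = [np.zeros_like(theta) for _ in range(num_levels)]  # recycled corrections

    refresh_period = lambda l: 2 ** l             # assumed: finer levels refresh less often
    n_samples = lambda l: 2 ** (num_levels - l)   # assumed MLMC-style sample allocation

    for t in range(num_iters):
        for l in range(num_levels):
            if t % refresh_period(l) == 0:        # pay the level-l cost only at refresh steps
                cached[l] = level_correction(theta, l, n_samples(l), rng)
        grad_estimate = sum(cached)               # telescoping MLMC sum of level corrections
        theta -= lr * grad_estimate
    return theta

if __name__ == "__main__":
    print(delayed_mlmc_sgd(theta0=np.zeros(3)))
```

With this schedule, most iterations touch only the coarse (cheap) levels, so the average per-iteration parallel work is dominated by the coarsest simulations, while the occasional fine-level refreshes keep the estimator close to the full MLMC gradient.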