L-SVRG and L-Katyusha with Adaptive Sampling

Published: 18 Mar 2023, Last Modified: 18 Mar 2023, Accepted by TMLR
Abstract: Stochastic gradient-based optimization methods, such as L-SVRG and its accelerated variant L-Katyusha (Kovalev et al., 2020), are widely used to train machine learning models. The theoretical and empirical performance of L-SVRG and L-Katyusha can be improved by sampling the observations from a non-uniform distribution (Qian et al., 2021). However, to design a desired sampling distribution, Qian et al. (2021) rely on prior knowledge of smoothness constants that can be computationally intractable to obtain in practice when the dimension of the model parameter is high. We propose an adaptive sampling strategy for L-SVRG and L-Katyusha that learns the sampling distribution with little computational overhead, allows it to change with the iterates, and requires no prior knowledge of the problem parameters. We prove convergence guarantees for L-SVRG and L-Katyusha for convex objectives when the sampling distribution changes with the iterates. These results show that, even without prior information, the proposed adaptive sampling strategy matches, and in some cases surpasses, the performance of the sampling scheme in Qian et al. (2021). Extensive simulations support our theory, and experiments on real data demonstrate the practical utility of the proposed sampling scheme.
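To make the setting concrete, below is a minimal sketch of L-SVRG with non-uniform (importance) sampling on a least-squares objective. The adaptive rule shown, which tilts the sampling distribution toward samples with larger observed gradient norms and mixes it with the uniform distribution, is only an illustrative stand-in for the paper's adaptive scheme; the function name and the parameters `eta`, `p`, and `alpha` are hypothetical choices, not values from the paper.

```python
import numpy as np

def lsvrg_adaptive(A, b, eta=0.1, p=None, alpha=0.5, n_iters=2000, seed=0):
    """L-SVRG with a simple adaptive importance-sampling distribution (illustrative)."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    p = 1.0 / n if p is None else p             # reference-point refresh probability
    x = np.zeros(d)                             # current iterate
    w = x.copy()                                # reference point
    grads_w = (A @ w - b)[:, None] * A          # per-sample gradients at w (cached)
    full_grad = grads_w.mean(axis=0)            # full gradient at w
    scores = np.ones(n)                         # running per-sample importance scores

    for _ in range(n_iters):
        q = scores / scores.sum()
        q = 0.5 * q + 0.5 / n                   # mix with uniform so q_i >= 1/(2n)
        i = rng.choice(n, p=q)
        gi_x = (A[i] @ x - b[i]) * A[i]         # gradient of f_i at the current iterate
        # unbiased variance-reduced gradient estimator under sampling distribution q
        g = (gi_x - grads_w[i]) / (n * q[i]) + full_grad
        x -= eta * g
        # illustrative adaptive step: track the observed gradient magnitude of sample i
        scores[i] = (1 - alpha) * scores[i] + alpha * np.linalg.norm(gi_x)
        if rng.random() < p:                    # loopless reference update
            w = x.copy()
            grads_w = (A @ w - b)[:, None] * A
            full_grad = grads_w.mean(axis=0)
    return x
```

The key point the sketch conveys is that the sampling distribution enters the estimator through the `1/(n q_i)` weighting, which keeps it unbiased for any distribution bounded away from zero, so the distribution itself can be learned on the fly rather than fixed in advance from smoothness constants.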
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: We have carefully revised the writing of the paper. Specifically, we place more emphasis on introducing the novelty. In addition, we have improved the presentation of the intuition behind the algorithm design and the empirical observations.
Assigned Action Editor: ~Lijun_Zhang1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 593