Stochastic Mirror Descent: Convergence Analysis and Adaptive Variants via the Mirror Stochastic Polyak Stepsize

Published: 08 Nov 2023, Last Modified: 08 Nov 2023, Accepted by TMLR
Abstract: We investigate the convergence of stochastic mirror descent (SMD) under interpolation in relatively smooth and smooth convex optimization. In relatively smooth convex optimization, we provide new convergence guarantees for SMD with a constant stepsize. For smooth convex optimization, we propose a new adaptive stepsize scheme: the mirror stochastic Polyak stepsize (mSPS). Notably, our convergence results in both settings make no bounded gradient or bounded variance assumptions, and we show convergence to a neighborhood that vanishes under interpolation. Consequently, these results provide the first convergence guarantees under interpolation for the exponentiated gradient algorithm with fixed or adaptive stepsizes. mSPS generalizes the recently proposed stochastic Polyak stepsize (SPS) (Loizou et al., 2021) to mirror descent and remains both practical and efficient for modern machine learning applications while inheriting the benefits of mirror descent. We complement our results with experiments across various supervised learning tasks and different instances of SMD, demonstrating the effectiveness of mSPS.
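To make the stepsize concrete, the sketch below shows one SMD step with a mirror Polyak-type stepsize for the negative-entropy mirror map on the probability simplex (i.e., exponentiated gradient, where the dual norm is the sup-norm). This is a minimal illustration, not the paper's implementation: it assumes an SPS_max-style form $\gamma = \min\{(f_i(x) - \ell_i^*)/(c\,\|\nabla f_i(x)\|_*^2),\ \gamma_b\}$ without the smoothing mentioned in the revision notes, and all function names and constants are illustrative.

```python
import numpy as np

def mirror_sps_step(x, loss_i, grad_i, loss_lb=0.0, c=0.5, gamma_max=1.0, eps=1e-12):
    """One exponentiated-gradient step (negative-entropy mirror map, simplex constraint)
    with a mirror Polyak-type stepsize. Sketch only; the paper's exact mSPS rule
    (e.g., iterate-to-iterate smoothing of the bound) may differ."""
    # Polyak-type stepsize: (f_i(x) - l_i^*) / (c * ||grad_i||_*^2), capped at gamma_max.
    # For the negative-entropy mirror map on the simplex, the dual norm is the sup-norm.
    dual_norm_sq = np.max(np.abs(grad_i)) ** 2 + eps
    gamma = min((loss_i - loss_lb) / (c * dual_norm_sq), gamma_max)
    # Exponentiated-gradient update: multiplicative step, then renormalize to the simplex.
    x_new = x * np.exp(-gamma * grad_i)
    return x_new / x_new.sum()

# Toy usage: stochastic least squares over the simplex with an interpolating solution,
# so the per-sample lower bound l_i^* = 0 is exact.
rng = np.random.default_rng(0)
x_star = np.array([0.10, 0.20, 0.30, 0.25, 0.15])
A = rng.normal(size=(100, 5))
b = A @ x_star
x = np.full(5, 0.2)
for _ in range(500):
    i = rng.integers(100)
    r = A[i] @ x - b[i]
    x = mirror_sps_step(x, loss_i=0.5 * r ** 2, grad_i=r * A[i])
```

Under interpolation the residuals vanish at the solution, so the numerator of the stepsize shrinks automatically as the iterates approach it; no stepsize schedule is tuned by hand.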
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission:
- Fixed names in references
- Added more constant stepsize baselines
- Added an mSPS baseline without smoothing and with $c=1$
- Added a link to the open-source implementation
- Added a reference to Chen and Teboulle for the three-point property
- Added a small subsection on relative smoothness in the appendix
Code: https://github.com/IssamLaradji/mirror-sps
Supplementary Material: zip
Assigned Action Editor: ~Stephen_Becker1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1192