A Homogenization Approach for Gradient-Dominated Stochastic Optimization

Published: 26 Apr 2024, Last Modified: 15 Jul 2024 · UAI 2024 poster · CC BY 4.0
Keywords: second-order algorithm, gradient dominance, reinforcement learning
TL;DR: This paper proposes a novel second-order algorithm for gradient-dominated stochastic optimization, which enjoys a cheap per-iteration cost and matches the best-known sample complexity.
Abstract: The gradient dominance property is a condition weaker than strong convexity, yet it suffices to ensure global convergence even in non-convex optimization. This property finds wide applications in machine learning, reinforcement learning (RL), and operations management. In this paper, we propose the stochastic homogeneous second-order descent method (SHSODM) for stochastic functions enjoying the gradient dominance property, based on a recently proposed homogenization approach. Theoretically, we provide a sample complexity analysis, and we further present an enhanced result by incorporating variance reduction techniques. Our findings show that SHSODM matches the best-known sample complexity achieved by other second-order methods for gradient-dominated stochastic optimization, but without cubic regularization. Empirically, since the homogenization approach only requires solving an extremal eigenvector problem at each iteration instead of a Newton-type system, our methods enjoy a cheaper computational cost and greater robustness on ill-conditioned problems. Numerical experiments on several RL tasks demonstrate that SHSODM outperforms other off-the-shelf methods.
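To illustrate what "solving an extremal eigenvector problem instead of a Newton-type system" can look like, here is a minimal, hypothetical sketch of one homogenized step in the spirit of the approach the abstract references. The function name `homogenized_direction`, the parameter `delta`, and the sign/threshold handling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def homogenized_direction(grad, hess, delta=0.1):
    """Hypothetical sketch of one homogenized second-order step.

    Instead of solving a Newton-type system H d = -g, form the
    augmented (homogenized) matrix
        F = [[H, g], [g^T, -delta]]
    and compute its leftmost (smallest-eigenvalue) eigenvector [v; t].
    A search direction is then recovered from that eigenvector.
    """
    n = grad.shape[0]
    F = np.zeros((n + 1, n + 1))
    F[:n, :n] = hess
    F[:n, n] = grad
    F[n, :n] = grad
    F[n, n] = -delta

    # Extremal eigenvector problem: smallest eigenpair of the symmetric F.
    _, eigvecs = np.linalg.eigh(F)
    v, t = eigvecs[:n, 0], eigvecs[n, 0]

    # Recover a direction; fall back to v alone if the last component is tiny.
    d = v / t if abs(t) > 1e-12 else v
    # Flip the sign if needed so that d is a descent direction (g . d <= 0).
    if grad @ d > 0:
        d = -d
    return d

# Toy usage with a random symmetric Hessian estimate and gradient estimate.
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 5))
H = (H + H.T) / 2
g = rng.standard_normal(5)
print(homogenized_direction(g, H))
```

The point of the sketch is the cost profile: each iteration reduces to an extremal eigenpair computation, which can be handled by iterative eigensolvers (e.g., Lanczos-type methods) using only matrix-vector products, rather than factorizing or inverting a Newton-type system.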
Supplementary Material: zip
List Of Authors: Jiyuan Tan, Chenyu Xue, Chuwen Zhang, Qi Deng, Dongdong Ge, Yinyu Ye
Latex Source Code: zip
Signed License Agreement: pdf
Submission Number: 632