Stochastic Optimization under Hidden Convexity

Published: 26 Oct 2023, Last Modified: 13 Dec 2023, NeurIPS 2023 Workshop Poster
Keywords: hidden convexity, stochastic optimization, global convergence
TL;DR: We analyze projected SGD and subgradient methods under the hidden convexity assumption.
Abstract: In this work, we consider stochastic non-convex constrained optimization problems under hidden convexity, i.e., problems that admit a convex reformulation via a black-box (non-linear, but invertible) map $c: \mathcal{X} \rightarrow \mathcal{U}$. A number of non-convex problems, ranging from optimal control and revenue and inventory management to convex reinforcement learning, admit such a hidden convex structure. Unfortunately, in the majority of the considered applications, the map $c(\cdot)$ is unavailable, and therefore the reduction to a convex optimization problem is not possible. On the other hand, (stochastic) gradients with respect to the original variable $x \in \mathcal{X}$ are often easy to obtain. Motivated by these observations, we analyze projected stochastic (sub)gradient methods under hidden convexity and provide the first sample complexity guarantees for global convergence in both the smooth and non-smooth settings. Additionally, we strengthen our results to last-iterate convergence in function value in the smooth setting using a momentum variant of projected stochastic gradient descent.
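To make the analyzed method concrete, here is a minimal Python sketch of projected stochastic gradient descent with an optional heavy-ball momentum term. The names `grad_oracle` and `project`, the toy objective $F(x) = (x^3 - 1)^2$, and the specific momentum update are illustrative assumptions, not the paper's exact algorithm; note that only gradients in the original variable $x$ are used, and the map $c(\cdot)$ is never queried, matching the black-box setting.

```python
import numpy as np

def projected_sgd(grad_oracle, project, x0, step_size, n_iters,
                  momentum=0.0, rng=None):
    """Projected stochastic (sub)gradient method: a minimal sketch.

    grad_oracle(x, rng) -- assumed to return an unbiased stochastic
        (sub)gradient of F at x in the original variable x.
    project(x) -- assumed Euclidean projection onto the feasible set X.
    momentum -- optional heavy-ball term; a generic stand-in for the
        paper's momentum variant, whose exact update may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_iters):
        g = grad_oracle(x, rng)          # stochastic (sub)gradient at x
        v = momentum * v + g             # momentum buffer (0.0 -> plain SGD)
        x = project(x - step_size * v)   # gradient step, then projection onto X
    return x

# Toy hidden-convex instance (hypothetical, for illustration only):
# F(x) = (x^3 - 1)^2 is non-convex in x but convex in u = c(x) = x^3,
# with global minimizer x* = 1 on X = [-2, 2].
grad = lambda x, rng: 6 * x**2 * (x**3 - 1) + 0.1 * rng.standard_normal(x.shape)
proj = lambda x: np.clip(x, -2.0, 2.0)
x_hat = projected_sgd(grad, proj, x0=np.array([1.5]),
                      step_size=0.01, n_iters=5000)
```

The toy instance illustrates hidden convexity: composing with the invertible map $c(x) = x^3$ gives the convex function $f(u) = (u - 1)^2$, so global convergence is attainable even though $F$ is non-convex in $x$.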
Submission Number: 30