Keywords: Zeroth-order, stochastic optimization, nonconvex optimization
TL;DR: We propose two new stationarity notions for stochastic nonconvex nonsmooth composite optimization, and obtain convergence rates of two zeroth-order algorithms to the two corresponding stationary points.
Abstract: This work aims to solve a stochastic nonconvex nonsmooth composite optimization problem. Previous works on composite optimization require the differentiable part to satisfy Lipschitz smoothness or some relaxed smoothness condition, which excludes machine learning examples such as regularized ReLU networks and sparse support matrix machines. In this work, we focus on the stochastic nonconvex composite optimization problem without any smoothness assumption. In particular, we propose two new notions of approximate stationary points for such problems (one stronger than the other) and obtain finite-time convergence results for two zeroth-order algorithms, each converging to one of these approximate stationary points. Finally, we demonstrate the effectiveness of these algorithms through numerical experiments.
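The abstract does not detail the algorithms, but zeroth-order methods of this kind are typically built on gradient estimates formed from function evaluations alone. As a hedged illustration (a textbook two-point Gaussian-smoothing estimator, not the paper's specific algorithm; `f`, `mu`, and the averaging count are illustrative choices):

```python
import numpy as np

def two_point_zo_grad(f, x, mu=1e-4, seed=None):
    """Two-point zeroth-order gradient estimate of f at x.

    Samples a random Gaussian direction u and returns the symmetric
    finite-difference quotient along u, scaled by u. Uses only two
    function evaluations and no derivatives. This is a generic
    estimator for illustration, not the method from the paper.
    """
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

# Example: estimate the gradient of f(x) = ||x||^2 (true gradient 2x),
# averaging many random directions to reduce the estimator's variance.
f = lambda x: np.dot(x, x)
x = np.array([1.0, -2.0, 3.0])
est = np.mean([two_point_zo_grad(f, x, seed=s) for s in range(2000)], axis=0)
```

In expectation this recovers the gradient of a smoothed version of `f`, which is why such estimators remain usable when `f` itself is nonsmooth.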
Primary Area: optimization
Submission Number: 14698