An Online Adaptive Sampling Algorithm for Stochastic Difference-of-Convex Optimization with Time-Varying Distributions
TL;DR: We propose an online adaptive DCA under time-varying distributions, with a novel pointwise convergence rate for the SAA of subdifferential mappings.
Abstract: We propose an online adaptive sampling algorithm for solving stochastic nonsmooth difference-of-convex (DC) problems under time-varying distributions. At each iteration, the algorithm relies solely on data generated from the current distribution and employs distinct adaptive sampling rates for the convex and concave components of the DC function, a novel design guided by our theoretical analysis. We show that, under proper conditions on the convergence of distributions, the algorithm converges subsequentially to DC critical points almost surely. Furthermore, the sample size requirement of our proposed algorithm matches the results achieved in the smooth case or when a measurable subgradient selector is available, both under static distributions. A key element of this analysis is the derivation of a novel $O(\sqrt{p/n})$ pointwise convergence rate (modulo logarithmic factors) for the sample average approximation of subdifferential mappings, where $p$ is the dimension of the variable and $n$ is the sample size -- a result of independent interest. Numerical experiments confirm that the proposed algorithm is both efficient and effective for addressing stochastic nonsmooth problems.
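To make the algorithmic idea concrete, the following is a minimal sketch of one possible online adaptive-sampling DCA iteration for $F(x) = \mathbb{E}[g(x;\xi)] - \mathbb{E}[h(x;\xi)]$ with $\xi$ drawn from the current distribution $P_k$. This is an illustration under stated assumptions, not the paper's exact method: the callables `sample`, `subgrad_g`, `subgrad_h`, the sample-size schedules `n_g`, `n_h`, and the step-size rule `step` are all hypothetical placeholders, and the subproblem is approximated here by a plain gradient step on the convexified surrogate.

```python
# Hypothetical sketch of an online adaptive-sampling DCA loop for
# F(x) = E[g(x; xi)] - E[h(x; xi)], xi ~ P_k (time-varying).
# All callables and schedules below are illustrative, not the paper's API.
import numpy as np

def online_adaptive_dca(x0, sample, subgrad_g, subgrad_h,
                        n_g, n_h, step, num_iters):
    """One possible realization of the adaptive-sampling DCA idea.

    sample(k, n)    -- draws n i.i.d. points from the current distribution P_k
    subgrad_g(x, s) -- a subgradient of the convex component g(., s) at x
    subgrad_h(x, s) -- a subgradient of the convex function h(., s) whose
                       negation forms the concave part of F
    n_g(k), n_h(k)  -- adaptive (typically growing) sample-size schedules,
                       allowed to differ across the two components
    step(k)         -- step size for the convexified subproblem
    """
    x = np.asarray(x0, dtype=float)
    for k in range(num_iters):
        # Use fresh data from the *current* distribution only.
        S_g = sample(k, n_g(k))
        S_h = sample(k, n_h(k))
        # Sample-average approximations of the two subdifferential maps.
        v_g = np.mean([subgrad_g(x, s) for s in S_g], axis=0)
        v_h = np.mean([subgrad_h(x, s) for s in S_h], axis=0)
        # DCA step: linearize the concave part at the current iterate and
        # take a gradient step on the resulting convex surrogate.
        x = x - step(k) * (v_g - v_h)
    return x
```

In this sketch, letting `n_g(k)` and `n_h(k)` grow at different rates mirrors the abstract's point that the convex and concave components can be sampled at distinct adaptive rates; the specific schedules and subproblem solver in the paper may differ.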
Lay Summary: Many real-world problems—from resource allocation to machine learning—can be described using a flexible mathematical structure called difference-of-convex (DC) functions, which represent complex goals as the difference between two simpler parts. While powerful, these problems become especially difficult when data arrives continuously and is uncertain, as in many online systems.
In this work, we introduce a new algorithm that adaptively tackles such problems in real time. The algorithm adjusts how much data to use for each part of the problem at every step, guided by our theoretical analysis. We also develop new mathematical tools to deal with the unique challenges caused by randomness and non-smoothness in these settings.
Our algorithm is not only reliable under changing data distributions but also matches the best performance known in easier cases. Experiments show it is both robust and efficient for solving demanding online optimization tasks.
Primary Area: Optimization->Stochastic
Keywords: nonsmooth, difference-of-convex, distribution shift, online optimization
Submission Number: 5080