Probability maximization via Minkowski functionals: convex representations and tractable resolution

Published: 01 Jan 2023, Last Modified: 12 May 2023, Math. Program. 2023
Abstract: In this paper, we consider the maximization of the probability $\mathbb{P}\left\{\, \zeta \,\mid\, \zeta \,\in\, \mathbf{K}(\mathbf{x}) \,\right\}$ over a closed and convex set $\mathcal{X}$, a special case of the chance-constrained optimization problem. Suppose $\mathbf{K}(\mathbf{x}) \,\triangleq\, \left\{\, \zeta \,\in\, \mathcal{K} \,\mid\, c(\mathbf{x},\zeta) \,\ge\, 0 \,\right\}$, where $\zeta$ is uniformly distributed on a convex and compact set $\mathcal{K}$ and $c(\mathbf{x},\zeta)$ is defined as either $c(\mathbf{x},\zeta) \,\triangleq\, 1-\left|\zeta^T\mathbf{x}\right|^m$ with $m \ge 0$ (Setting A) or $c(\mathbf{x},\zeta) \,\triangleq\, T\mathbf{x} \,-\, \zeta$ (Setting B). We show that in either setting, by leveraging recent findings in the context of non-Gaussian integrals of positively homogeneous functions, $\mathbb{P}\left\{\, \zeta \,\mid\, \zeta \,\in\, \mathbf{K}(\mathbf{x}) \,\right\}$ can be expressed as the expectation of a suitably defined continuous function $F(\bullet,\xi)$ with respect to an appropriately defined Gaussian density (or its variant), i.e., $\mathbb{E}_{\tilde{p}}\left[\, F(\mathbf{x},\xi) \,\right]$. Aided by a recent observation in convex analysis, we then develop a convex representation of the original problem requiring the minimization of $g\left(\mathbb{E}\left[\, F(\bullet,\xi) \,\right]\right)$ over $\mathcal{X}$, where $g$ is an appropriately defined smooth convex function.
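To make the objective concrete, the following toy sketch estimates the Setting B probability $\mathbb{P}\{\zeta \mid T\mathbf{x} - \zeta \ge 0\}$ by Monte Carlo. The choice of $\mathcal{K}$ as the unit box and $T$ as the identity are illustrative assumptions (the paper only requires $\mathcal{K}$ convex and compact); on the box the probability has the closed form $\prod_i \min(\max((T\mathbf{x})_i, 0), 1)$, which lets us sanity-check the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_setting_b(x, T, n_samples=200_000):
    """Monte Carlo estimate of P{zeta | c(x, zeta) = T x - zeta >= 0}
    for zeta uniform on the unit box [0, 1]^d (an assumed stand-in
    for the convex compact set K of the abstract)."""
    d = T.shape[0]
    zeta = rng.uniform(0.0, 1.0, size=(n_samples, d))
    # The event {c(x, zeta) >= 0} is the componentwise event {zeta <= T x}.
    return np.mean(np.all(zeta <= T @ x, axis=1))

T = np.eye(3)
x = np.array([0.5, 0.8, 0.9])
est = prob_setting_b(x, T)
exact = np.prod(np.clip(T @ x, 0.0, 1.0))  # closed form on the unit box
print(est, exact)
```

The paper's contribution is precisely to avoid optimizing this sampled probability directly, by rewriting it as a Gaussian expectation inside a convex composite objective.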
Traditional stochastic approximation schemes cannot contend with the minimization of $g\left(\mathbb{E}\left[F(\bullet,\xi)\right]\right)$ over $\mathcal{X}$, since conditionally unbiased sampled gradients are unavailable. We then develop a regularized variance-reduced stochastic approximation (r-VRSA) scheme that obviates the need for such unbiasedness by combining iterative regularization with variance reduction. Notably, (r-VRSA) is characterized by almost-sure convergence guarantees, a convergence rate of $\mathcal{O}(1/k^{1/2-a})$ in expected sub-optimality where $a > 0$, and a sample complexity of $\mathcal{O}(1/\epsilon^{6+\delta})$ where $\delta > 0$. To the best of our knowledge, this may be the first such scheme for probability maximization problems with convergence and rate guarantees. Preliminary numerics on a portfolio selection problem (Setting A) and a set-covering problem (Setting B) suggest that the scheme competes well with naive mini-batch SA schemes as well as integer programming approximation methods.
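The source of bias, and the shape of the remedy, can be sketched on a toy compositional problem. Everything below is an assumption for illustration, not the paper's actual $F$, $g$, or parameter schedules: take $F(x,\xi) = x + \xi$ with $\xi \sim N(0,1)$ and $g(u) = u^2 + u^4$. Since $g'$ is nonlinear, the plug-in gradient $g'(\text{batch mean of } F)$ is biased for $g'(\mathbb{E}[F])$, which is the obstacle the abstract names; growing batch sizes (variance reduction) shrink that bias while a vanishing regularizer $\lambda_k x$ and diminishing steps stabilize the iterates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative instance (our assumption, not the paper's):
#   F(x, xi) = x + xi, xi ~ N(0, 1), so E[F(x, xi)] = x;
#   g(u) = u^2 + u^4 is smooth and convex; minimize g(E[F(x, xi)])
#   over X = [-1, 2], whose minimizer is x* = 0.

def r_vrsa(x0, iters=3000):
    """Sketch of a regularized variance-reduced SA loop with assumed
    schedules N_k = k, lambda_k = k^(-1/4), gamma_k = k^(-3/4)."""
    x = x0
    for k in range(1, iters + 1):
        batch_mean = x + rng.standard_normal(k).mean()     # N_k = k samples of F(x, xi)
        grad_est = 2.0 * batch_mean + 4.0 * batch_mean**3  # biased plug-in g'(batch mean)
        lam = 1.0 / k**0.25                                # vanishing regularization
        gamma = 1.0 / k**0.75                              # diminishing step size
        x = np.clip(x - gamma * (grad_est + lam * x), -1.0, 2.0)  # projected step onto X
    return x

print(r_vrsa(x0=2.0))  # should approach the minimizer x* = 0
```

The growing batch makes the plug-in bias vanish at a controlled rate, while the regularizer keeps the intermediate problems well-posed; the paper's analysis couples these schedules to obtain the stated a.s. convergence and rate guarantees.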