Abstract: We consider the problem of finding an approximate second-order stationary point of a
constrained non-convex optimization problem. We first show that, unlike the gradient descent method
for unconstrained optimization, the vanilla projected gradient descent algorithm may converge to a
strict saddle point even when there is only a single linear constraint. We then provide a hardness
result by showing that checking (ε_g, ε_H)-second-order stationarity is NP-hard even in the presence
of linear constraints. Despite our hardness result, we identify instances of the problem for which
checking second-order stationarity can be done efficiently. For such instances, we propose a dynamic
second-order Frank–Wolfe algorithm which converges to (ε_g, ε_H)-second-order stationary points in
O(max{ε_g^{-2}, ε_H^{-3}}) iterations. The proposed algorithm can be used in general constrained non-convex
optimization as long as the constrained quadratic sub-problem can be solved efficiently.
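
Since the abstract uses (ε_g, ε_H)-second-order stationarity without defining it, the following is one common formalization for min_{x ∈ C} f(x) with C convex and compact; the paper's precise definition may differ in how it normalizes feasible directions, so treat this as an assumed reference point rather than the authors' definition.

```latex
% One common notion of an (\epsilon_g, \epsilon_H)-second-order stationary
% point for min_{x \in C} f(x), C convex and compact. Definitions vary
% across papers; this version is an assumption, for orientation only.
\[
  \langle \nabla f(x),\, y - x \rangle \;\ge\; -\epsilon_g
  \quad \text{for all } y \in C,
\]
\[
  d^{\top} \nabla^2 f(x)\, d \;\ge\; -\epsilon_H
  \quad \text{for all unit feasible directions } d \text{ at } x.
\]
```

The first condition says the Frank–Wolfe gap max_{y ∈ C} ⟨∇f(x), x − y⟩ is at most ε_g; the second rules out feasible directions along which the curvature is more negative than −ε_H.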
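To see how a single linear constraint changes the behavior of projected gradient descent, here is a minimal illustrative instance (constructed for this note, not taken from the paper): minimize f(x, y) = ½(x² + y²) + 2xy over the half-space {y ≥ 0}. The origin is first-order stationary but admits the feasible negative-curvature direction (−1, 1), so it is a strict saddle of the constrained problem; yet PGD converges to it because the projection keeps clamping the iterates to the constraint boundary.

```python
import numpy as np

# Illustrative instance (constructed for this note, not from the paper):
# minimize f(x, y) = 0.5*(x^2 + y^2) + 2*x*y over the half-space {y >= 0},
# a single linear constraint. The Hessian [[1, 2], [2, 1]] has eigenvalues
# 3 and -1, and its negative-curvature eigenvector (-1, 1) is a feasible
# direction at the origin, so (0, 0) is a strict saddle of the constrained
# problem.

H = np.array([[1.0, 2.0], [2.0, 1.0]])

def grad(z):
    return H @ z                      # gradient of f(z) = 0.5 * z^T H z

def project(z):
    return np.array([z[0], max(z[1], 0.0)])   # projection onto {y >= 0}

z = np.array([1.0, 0.5])              # generic initialization with y > 0
eta = 0.1                             # step size
for _ in range(500):
    z = project(z - eta * grad(z))

print("PGD limit:", z)                # approximately (0, 0)

d = np.array([-1.0, 1.0]) / np.sqrt(2.0)      # feasible direction at (0, 0)
print("curvature along d:", d @ H @ d)        # -1.0 < 0: strict saddle
```

Initializations with x > 0 and sufficiently small y ≥ 0 are eventually clamped to the boundary y = 0 and then contract to the origin, so convergence to this strict saddle occurs from a set of positive measure; unconstrained gradient descent on the same f escapes the saddle from almost every starting point.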
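The abstract does not spell out the dynamic second-order Frank–Wolfe algorithm, so the sketch below is only a generic stand-in in the same spirit: at each iterate it solves a constrained quadratic sub-problem (minimizing the second-order Taylor model of f over the feasible set) and stops when no significant model decrease remains. The test function, the box feasible set, the grid-search sub-problem solver, and the backtracking rule are all assumptions for illustration, not the paper's method.

```python
import numpy as np
from itertools import product

# Assumed sketch of a second-order method driven by a constrained quadratic
# sub-problem, in the spirit of the abstract's description. NOT the paper's
# "dynamic second-order Frank-Wolfe" algorithm, whose details the abstract
# does not give. Feasible set: the box [0, 1]^2; the sub-problem is solved
# by brute-force grid search purely for illustration.

def f(z):
    x, y = z
    return np.sin(3 * x) * y**2 + (x - 0.5) ** 2   # smooth nonconvex test f

def grad(z):
    x, y = z
    return np.array([3 * np.cos(3 * x) * y**2 + 2 * (x - 0.5),
                     2 * np.sin(3 * x) * y])

def hess(z):
    x, y = z
    return np.array([[-9 * np.sin(3 * x) * y**2 + 2, 6 * np.cos(3 * x) * y],
                     [6 * np.cos(3 * x) * y, 2 * np.sin(3 * x)]])

def model_min(x, grid=41):
    # Constrained quadratic sub-problem: minimize the second-order Taylor
    # model g.d + 0.5 d^T H d (d = s - x) over the box, by grid search.
    g, Hx = grad(x), hess(x)
    best_val, best_s = 0.0, x
    for a, c in product(np.linspace(0.0, 1.0, grid), repeat=2):
        d = np.array([a, c]) - x
        val = g @ d + 0.5 * d @ Hx @ d
        if val < best_val:
            best_val, best_s = val, np.array([a, c])
    return best_val, best_s

def solve(x, tol=1e-6, iters=200):
    for _ in range(iters):
        val, s = model_min(x)
        if val > -tol:
            return x              # no significant model decrease: approx. SOSP
        step, fx = 1.0, f(x)
        while step > 1e-8:        # backtrack until actual decrease in f
            trial = x + step * (s - x)
            if f(trial) < fx:
                x = trial
                break
            step *= 0.5
        else:
            return x              # model decrease failed to materialize
    return x

x_star = solve(np.array([0.9, 0.9]))
print("approximate SOSP:", x_star, "f =", f(x_star))
```

Folding the first- and second-order checks into a single model-decrease test keeps the sketch short; the paper's algorithm presumably handles the ε_g and ε_H tests separately to obtain its O(max{ε_g^{-2}, ε_H^{-3}}) iteration bound.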