Keywords: Convex optimization, Quasi-Self-Concordant Optimization
TL;DR: We give a practical algorithm for optimizing over-parametrized quasi-self-concordant functions to high precision via $\ell_{\infty}$ Lewis weights
Abstract: In this paper, we study the problem $\min_{x\in\mathbb{R}^{d},\,Nx=v}\sum_{i=1}^{n}f((Ax-b)_{i})$
for a quasi-self-concordant function $f:\mathbb{R}\to\mathbb{R}$, where $A$ and $N$ are
$n\times d$ and $m\times d$ matrices, and $b$ and $v$ are vectors of length
$n$ and $m$, with $n\ge d$. We give an algorithm based on a trust-region
method with an oracle that can be implemented using $\widetilde{O}(d^{1/3})$
linear system solves, improving on the $\widetilde{O}(n^{1/3})$ oracle
of [Adil-Bullins-Sachdeva, NeurIPS 2021]. Our implementation of
the oracle relies on solving the overdetermined $\ell_{\infty}$-regression
problem $\min_{x\in\mathbb{R}^{d},\,Nx=v}\|Ax-b\|_{\infty}$. We provide an
algorithm that finds a $(1+\epsilon)$-approximate solution to this
problem using $O((d^{1/3}/\epsilon+1/\epsilon^{2})\log(n/\epsilon))$
linear system solves. This algorithm leverages $\ell_{\infty}$ Lewis
weight overestimates and achieves its iteration complexity via a
simple, lightweight IRLS approach inspired by the work of [Ene-Vladu,
ICML 2019]. Experimentally, we demonstrate that our algorithm significantly
improves on the runtime of the standard CVX solver.
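To give a flavor of the IRLS idea for $\ell_{\infty}$-regression, here is a minimal illustrative sketch. It uses the classical heuristic of approximating $\min\|Ax-b\|_{\infty}$ by $\ell_p$-regression for a large $p$, solved with damped iteratively reweighted least squares. This is *not* the paper's algorithm: it omits the Lewis weight overestimates, the $Nx=v$ constraint, and the stated iteration-complexity guarantees; the function name and parameters are hypothetical.

```python
import numpy as np

def irls_linf(A, b, p=8, iters=50, eps=1e-12):
    # Illustrative heuristic only (not the paper's method): approximate
    # min ||Ax - b||_inf by l_p regression with large p, solved via
    # damped iteratively reweighted least squares (IRLS).
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # warm start: l_2 solution
    step = 1.0 / (p - 1)  # damping keeps IRLS stable when p > 2
    for _ in range(iters):
        r = A @ x - b
        w = np.abs(r) ** (p - 2) + eps        # l_p IRLS weights |r_i|^{p-2}
        sw = np.sqrt(w)
        # weighted least-squares step: min sum_i w_i (a_i^T x - b_i)^2
        x_wls = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
        x = x + step * (x_wls - x)            # damped move toward WLS solution
    return x
```

On a toy instance such as $A=\mathbf{1}\in\mathbb{R}^{3\times 1}$, $b=(0,0,9)$, the plain least-squares solution has $\ell_{\infty}$ residual $6$, while the damped IRLS iterate moves toward the minimax solution (residual $4.5$), illustrating how reweighting shifts effort onto the largest residuals.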
Supplementary Material: zip
Primary Area: Optimization (e.g., convex and non-convex, stochastic, robust)
Submission Number: 20297