Target-based Surrogates for Stochastic Optimization

Published: 23 Nov 2022, Last Modified: 07 Apr 2024, OPT 2022 Poster
Keywords: stochastic optimization, surrogate optimization, composite function
TL;DR: We consider minimizing functions for which it is computationally expensive to query the gradient, using a surrogate function that can be minimized more efficiently.
Abstract: We consider minimizing functions for which it is expensive to compute the gradient. Such functions are prevalent in reinforcement learning, imitation learning and bilevel optimization. Our target optimization framework uses the (expensive) gradient computation to construct surrogate functions in a \emph{target space} (e.g. the logits output by a linear model for classification) that can be minimized efficiently. This allows for multiple parameter updates to the model, amortizing the cost of gradient computation. In the full-batch setting, we prove that our surrogate is a global upper-bound on the loss, and can be (locally) minimized using a black-box optimization algorithm. We prove that the resulting majorization-minimization algorithm ensures convergence to a stationary point of the loss. Next, we instantiate our framework in the stochastic setting and propose the $SSO$ algorithm that can be viewed as projected stochastic gradient descent in the target space. This connection enables us to use standard stochastic optimization algorithms to construct surrogates which can be minimized using deterministic optimization. Our experiments on supervised learning and imitation learning exhibit the benefits of target optimization, even in stochastic settings.
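The abstract's core idea — query the expensive gradient once in target space, then take many cheap parameter updates on a quadratic upper-bound surrogate — can be sketched roughly as follows. This is a hypothetical minimal illustration for a linear model with targets $z = X\theta$, assuming an $L$-smooth loss so that $f(z) \le f(z_0) + \nabla f(z_0)^\top (z - z_0) + \frac{L}{2}\|z - z_0\|^2$; the function names and the specific surrogate form are assumptions, not the paper's exact construction.

```python
import numpy as np

def sso_sketch(X, grad_f, theta, L=1.0, inner_steps=10, lr=0.1):
    """One outer iteration of a target-space surrogate method (sketch).

    X          : (n, d) data matrix; targets are z = X @ theta (e.g. logits)
    grad_f     : callable returning the (expensive) gradient of the loss in
                 target space
    L          : assumed smoothness constant of the loss in target space
    inner_steps: number of cheap surrogate-minimization steps per gradient query
    """
    z0 = X @ theta        # targets at the current parameters
    g = grad_f(z0)        # single expensive gradient query, amortized below
    for _ in range(inner_steps):
        z = X @ theta
        # Gradient w.r.t. theta of the quadratic surrogate
        #   g^T (z - z0) + (L / 2) * ||z - z0||^2
        surrogate_grad = X.T @ (g + L * (z - z0))
        theta = theta - lr * surrogate_grad
    return theta
```

Each call to `sso_sketch` spends one gradient query on `inner_steps` parameter updates; repeating the outer loop gives the majorization-minimization scheme described above, with the surrogate re-centered at the new targets each time.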
Community Implementations: 2 code implementations (https://www.catalyzex.com/paper/arxiv:2302.02607/code)
