Optimization using Parallel Gradient Evaluations on Multiple Parameters

Published: 23 Nov 2022, Last Modified: 07 Apr 2024, OPT 2022 Poster
Keywords: convex optimization, gradient, distributed
TL;DR: We consider convex optimization problems and propose a method that uses gradients from multiple parameters in synergy to update these parameters together towards the optimum.
Abstract: We propose a first-order method for convex optimization, where instead of being restricted to the gradient from a single parameter, gradients from multiple parameters can be used during each step of gradient descent. This setup is particularly useful when a few processors are available that can be used in parallel for optimization. Our method uses gradients from multiple parameters in synergy to update these parameters together towards the optimum. While doing so, the method ensures that its computational and memory complexity remains of the same order as that of gradient descent. Empirical results demonstrate that even using gradients from as few as \textit{two} parameters, our method can often obtain significant acceleration and provide robustness to hyper-parameter settings. We remark that the primary goal of this work is not theoretical; rather, it is to explore the understudied case of using multiple gradients during each step of optimization.
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/arxiv:2302.03161/code)
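
To make the setup described in the abstract concrete, below is a minimal Python sketch of the general idea: a small number of parameter vectors are maintained, their gradients can be evaluated in parallel, and the iterates are coupled so that each update uses information from all of them. The coupling rule shown here (pulling every iterate toward the mean iterate) and the coefficient `0.1` are placeholder assumptions for illustration only; they are not the paper's actual update rule, and `parallel_multi_param_gd` is a hypothetical name.

```python
import numpy as np

def parallel_multi_param_gd(grad, x_inits, lr=0.1, n_steps=100):
    """Illustrative sketch (not the paper's method): maintain several parameter
    vectors, evaluate their gradients (these evaluations are independent and can
    run in parallel on separate processors), and couple the iterates so that each
    update uses information from all of them."""
    xs = [np.asarray(x, dtype=float) for x in x_inits]
    for _ in range(n_steps):
        # The K gradient evaluations below are independent; on real hardware
        # they could be dispatched to K processors in parallel.
        grads = [grad(x) for x in xs]
        # Plain gradient step on each parameter vector.
        xs = [x - lr * g for x, g in zip(xs, grads)]
        # Placeholder coupling rule: pull every iterate slightly toward the mean
        # iterate, so the points share information as they move toward the optimum.
        mean = np.mean(xs, axis=0)
        xs = [x + 0.1 * (mean - x) for x in xs]
    return xs

# Usage on a toy convex quadratic f(x) = ||x||^2, whose gradient is 2x,
# starting from two parameter vectors (the "as few as two parameters" case).
if __name__ == "__main__":
    solutions = parallel_multi_param_gd(lambda x: 2 * x, [np.ones(3), -np.ones(3)])
    print(solutions)
```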