Information-constrained optimization: can adaptive processing of gradients help?

Published: 09 Nov 2021, Last Modified: 05 May 2023. NeurIPS 2021 Poster.
Keywords: optimization, convex optimization, adaptivity, information constraints, local privacy, communication constraints, lower bounds, random coordinate descent
TL;DR: We prove tight bounds on the convergence rate of first-order optimization when the gradients cannot be fully observed (information constraints), but the algorithm can still adaptively choose how to make those limited observations.
Abstract: We revisit first-order optimization under local information constraints such as local privacy, gradient quantization, and computational constraints that limit access to only a few coordinates of the gradient. In this setting, the optimization algorithm is not allowed to access the complete output of the gradient oracle directly, but only receives limited information about it, subject to the local information constraints. We study the role of adaptivity in processing the gradient output to extract this limited information, and obtain tight or nearly tight bounds for both convex and strongly convex optimization when adaptive gradient processing is allowed.
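To make the setting concrete, below is a minimal Python sketch of one such information constraint: the optimizer may observe only a single coordinate of the gradient per oracle query (a computational constraint), and it adaptively chooses which coordinate to observe. The selection heuristic, function names, and parameters here are illustrative assumptions, not the algorithm analyzed in the paper.

```python
import numpy as np

def coord_limited_oracle(grad_fn, x, i):
    # Information-constrained oracle: reveals only coordinate i of the
    # gradient, never the full gradient vector.
    return grad_fn(x)[i]

def adaptive_coordinate_descent(grad_fn, x0, steps=3000, lr=0.1):
    # Toy adaptive scheme (an illustrative heuristic, not the paper's
    # method): query the coordinate whose running gradient-magnitude
    # estimate is largest, and update only that coordinate.
    x = np.asarray(x0, dtype=float).copy()
    est = np.ones_like(x)            # running per-coordinate |gradient| estimates
    for _ in range(steps):
        i = int(np.argmax(est))      # adaptive choice of which coordinate to observe
        g_i = coord_limited_oracle(grad_fn, x, i)
        x[i] -= lr * g_i             # first-order update on the observed coordinate
        est[i] = 0.9 * est[i] + 0.1 * abs(g_i)  # refresh that coordinate's estimate
    return x

# Example: minimize the convex quadratic f(x) = 0.5 * sum(d_j * x_j**2).
d = np.array([10.0, 1.0, 0.1])
grad = lambda x: d * x
print(adaptive_coordinate_descent(grad, np.ones(3)))  # approaches the origin
```

A non-adaptive baseline would instead draw the coordinate index uniformly at random (plain random coordinate descent); the gap between such non-adaptive schemes and adaptive ones like the sketch above is the kind of question the paper's upper and lower bounds address.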
Supplementary Material: pdf
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.