Distributed Optimization and Learning with Automated Stepsizes

Published: 01 Jan 2024 · Last Modified: 25 Sept 2025 · CDC 2024 · CC BY-SA 4.0
Abstract: The selection of stepsizes has always been an elusive task in distributed optimization and learning. Although some stepsize-automation approaches have been proposed for centralized optimization, they are inapplicable in the distributed setting. This is because in distributed optimization/learning, letting individual agents adapt their own stepsizes unavoidably results in stepsize heterogeneity, which can easily lead to algorithmic divergence. To address this issue, we propose an approach that enables agents to adapt their individual stepsizes without any manual tuning or global knowledge of the objective function. To the best of our knowledge, this is the first algorithm to successfully automate stepsize selection in distributed optimization/learning. Its performance is validated on several machine learning applications, including logistic regression, matrix factorization, and image classification.
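To make the setting concrete, below is a minimal, purely illustrative Python sketch of decentralized gradient descent in which each agent adapts its own stepsize from local information (here with a Barzilai-Borwein-style estimate on a ring graph with Metropolis mixing weights). This is not the paper's algorithm; the problem data, network, and stepsize rule are all assumptions chosen only to show how per-agent (heterogeneous) stepsizes arise when no global knowledge of the objective is available.

```python
import numpy as np

# Illustrative sketch only (NOT the paper's method): decentralized gradient
# descent over a ring of agents, where each agent picks its own stepsize from
# local information via a Barzilai-Borwein-style estimate.

rng = np.random.default_rng(0)
n_agents, dim = 5, 10

# Each agent i privately holds a quadratic f_i(x) = 0.5 * x^T A_i x - b_i^T x.
A = [np.diag(rng.uniform(0.5, 2.0, dim)) for _ in range(n_agents)]
b = [rng.normal(size=dim) for _ in range(n_agents)]

def grad(i, xi):
    """Local gradient of agent i's private objective."""
    return A[i] @ xi - b[i]

# Doubly stochastic mixing matrix for a ring graph (Metropolis weights).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in ((i - 1) % n_agents, (i + 1) % n_agents):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

x = np.zeros((n_agents, dim))                 # local iterates, one row per agent
prev_x = x.copy()
prev_g = np.stack([grad(i, x[i]) for i in range(n_agents)])
alpha = np.full(n_agents, 0.1)                # per-agent (heterogeneous) stepsizes

for k in range(200):
    mixed = W @ x                             # consensus step: mix with neighbors
    g = np.stack([grad(i, x[i]) for i in range(n_agents)])
    for i in range(n_agents):
        # Local Barzilai-Borwein estimate; needs no global problem knowledge.
        s, y = x[i] - prev_x[i], g[i] - prev_g[i]
        if k > 0 and abs(s @ y) > 1e-12:
            alpha[i] = np.clip((s @ s) / (s @ y), 1e-3, 1.0)
    prev_x, prev_g = x.copy(), g.copy()
    x = mixed - alpha[:, None] * g            # gradient step with each agent's own stepsize

print("disagreement across agents:", np.linalg.norm(x - x.mean(axis=0)))
```

In this toy setup the locally adapted stepsizes differ across agents, which is exactly the heterogeneity the abstract identifies as a source of divergence in naive schemes and which the proposed approach is designed to handle.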