Abstract: Stepsize selection has long been an elusive task in distributed optimization and learning. Although several stepsize-automation approaches have been proposed for centralized optimization, they are inapplicable in the distributed setting: letting individual agents adapt their own stepsizes unavoidably introduces stepsize heterogeneity, which can easily cause algorithmic divergence. To address this issue, we propose an approach that enables agents to adapt their individual stepsizes without any manual tuning or global knowledge of the objective function. To the best of our knowledge, this is the first algorithm to successfully automate stepsize selection in distributed optimization/learning. Its performance is validated on several machine learning applications, including logistic regression, matrix factorization, and image classification.
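To illustrate why stepsize heterogeneity is dangerous, the following is a minimal, hypothetical Python sketch (not the paper's algorithm) of plain decentralized gradient descent (DGD) on scalar quadratic objectives, with all problem values chosen for illustration only. With one common small stepsize the network settles near the global minimizer, whereas letting each agent pick its own stepsize can make the linear iteration expansive, so the run diverges.

```python
import numpy as np

# Toy setup (hypothetical, not from the paper): three agents with local
# objectives f_i(x) = 0.5 * a_i * (x - b_i)^2; each agent only ever
# evaluates its own local gradient.
a = np.array([1.0, 2.0, 4.0])   # local curvatures
b = np.array([0.0, 1.0, -1.0])  # local minimizers

# Doubly stochastic mixing matrix for three fully connected agents.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

def dgd(alphas, iters=200):
    """DGD: each agent averages neighbors' iterates, then takes a local
    gradient step with its own stepsize alphas[i]."""
    x = np.zeros(3)
    for _ in range(iters):
        grads = a * (x - b)           # local gradients, computed locally
        x = W @ x - alphas * grads
    return x

x_star = np.sum(a * b) / np.sum(a)    # minimizer of the global sum
print("global minimizer      :", x_star)
# A common small stepsize converges to a neighborhood of x_star
# (up to the well-known constant-stepsize DGD bias).
print("uniform stepsize 0.05 :", dgd(np.full(3, 0.05)))
# Heterogeneous stepsizes: the high-curvature agent's step is too large
# for its curvature, the iteration becomes expansive, and x blows up.
print("heterogeneous steps   :", dgd(np.array([0.05, 0.30, 0.60])))
```

The sketch shows the core tension the abstract describes: any per-agent adaptation scheme must keep every product of local stepsize and local curvature within the region where the coupled consensus-plus-gradient iteration remains contractive, which no agent can verify from local information alone.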