Keywords: Distributed optimization, Performance estimation problem, local step sizes
Abstract: Distributed optimization is a core enabling technique for large-scale machine learning, multi-agent systems, and decentralized control, allowing both data and computation to be distributed across multiple agents. A key challenge in the design of distributed optimization algorithms lies in selecting appropriate step sizes. Most existing distributed algorithms rely on a coordinated global step size across the agents, which may be difficult to implement in a fully decentralized setting with many agents. Although some efforts have been made to develop adaptive or uncoordinated step size strategies for distributed optimization, these approaches generally yield inferior performance compared with their coordinated counterparts, in which a single global step size is designed under the same step size strategy. In this work, we present a somewhat surprising finding: local step sizes for distributed optimization (with no coordination) can outperform their global step size counterparts. The results are obtained using a rigorous computer-assisted performance estimation technique (based on semidefinite programming) for optimization algorithms and apply to all convex and smooth objective functions. To the best of our knowledge, this is the first time such results have been established for general objective functions in a rigorous and systematic manner. Experimental results on benchmark datasets confirm the theoretical findings.
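As a minimal sketch of the setting the abstract describes (not the paper's algorithm or its semidefinite-programming analysis), the snippet below runs decentralized gradient tracking with agent-specific, uncoordinated step sizes on toy scalar quadratics; the network, objectives, step sizes, and the gradient-tracking scheme are all assumptions chosen for illustration.

```python
import numpy as np

# Each agent i holds a private quadratic f_i(x) = (a_i / 2) * (x - b_i)^2.
# The global optimum of sum_i f_i is the weighted mean of the b_i.
a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, -1.0, 2.0])
x_star = (a * b).sum() / a.sum()  # global minimizer = 5/6

# Doubly stochastic mixing matrix for a fully connected 3-agent network.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

grad = lambda x: a * (x - b)          # per-agent local gradients
alpha = np.array([0.05, 0.08, 0.03])  # uncoordinated local step sizes

x = np.zeros(3)   # local iterates, one per agent
y = grad(x)       # gradient-tracking variables, initialized at local gradients
for _ in range(2000):
    x_new = W @ x - alpha * y                 # consensus step + local step size
    y = W @ y + grad(x_new) - grad(x)         # track the average gradient
    x = x_new

print(np.max(np.abs(x - x_star)))  # all agents approach the global optimum
```

The gradient-tracking correction keeps the sum of the `y` variables equal to the sum of the local gradients, so a fixed point with small heterogeneous step sizes forces every agent onto the global minimizer despite the lack of any coordinated step size.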
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 12419