On Improved Distributed Random Reshuffling over Networks

Published: 01 Jan 2024, Last Modified: 13 May 2025 · ICASSP 2024 · CC BY-SA 4.0
Abstract: In this paper, we consider a distributed optimization problem in which a network of n agents, each with its own local loss function, collaboratively minimizes the global average of the local losses. We prove improved convergence results for two recently proposed random reshuffling (RR) based algorithms, D-RR and GT-RR, for smooth strongly convex and nonconvex problems, respectively. In particular, we prove an additional speedup with increasing n in both cases. Our experiments show that these methods can provide further communication savings by performing multiple local gradient steps between successive communications, while also outperforming decentralized SGD. Our experiments also reveal a gap in the theoretical understanding of these methods in the nonconvex case.
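To make the setting concrete, below is a minimal sketch of a decentralized random-reshuffling-style epoch of the kind the abstract describes: each agent reshuffles its local samples once per epoch, takes a stochastic gradient step on the next reshuffled sample, and then averages with its neighbors through a mixing matrix. This is an illustrative toy (quadratic local losses, a ring topology, Metropolis-style weights, and all names such as `grad` and `gamma` are assumptions), not the paper's exact D-RR or GT-RR pseudocode.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, d = 4, 8, 3                        # agents, samples per agent, dimension
A = rng.standard_normal((n, m, d))       # per-sample features for each agent
b = rng.standard_normal((n, m))          # per-sample targets

# Doubly stochastic mixing matrix for a ring of n agents (Metropolis-style).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def grad(i, j, x):
    """Gradient of the j-th local sample loss 0.5*(a^T x - b)^2 at agent i."""
    a = A[i, j]
    return a * (a @ x - b[i, j])

x = np.zeros((n, d))                     # one iterate per agent
gamma = 0.05                             # step size (illustrative value)

for epoch in range(50):
    # Random reshuffling: each agent draws a fresh permutation every epoch,
    # so every local sample is used exactly once per epoch.
    perms = [rng.permutation(m) for _ in range(n)]
    for t in range(m):
        # Local gradient step on the next reshuffled sample, then mixing.
        x = x - gamma * np.stack([grad(i, perms[i][t], x[i]) for i in range(n)])
        x = W @ x                        # one communication round per step

print("disagreement across agents:", np.linalg.norm(x - x.mean(axis=0)))
```

The communication-saving variant mentioned in the abstract would, under this sketch's assumptions, amount to applying the mixing step `x = W @ x` only once every few local gradient steps rather than after each one.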