Decentralized Stochastic Optimization with Client Sampling

02 Oct 2022, 17:24 (modified: 29 Nov 2022, 13:59), OPT 2022 Poster
Keywords: decentralized learning, partial client participation, optimization
TL;DR: Unified Convergence Analysis for Decentralized Stochastic Optimization with Client Sampling!
Abstract: Decentralized optimization is a key setting toward enabling data privacy and on-device learning over networks. Existing research primarily focuses on distributing the objective function across $n$ nodes/clients, lagging behind real-world challenges such as i) node availability---not all $n$ nodes are always available during the optimization---and ii) slow information propagation (caused by a large number of nodes $n$). In this work, we study Decentralized Stochastic Gradient Descent (D-SGD) with node subsampling, i.e., when only $s~(s \leq n)$ nodes are randomly sampled out of $n$ nodes per iteration. We provide theoretical convergence rates for smooth (convex and non-convex) problems with heterogeneous (non-identically distributed) functions. Our theoretical results capture the effect of node subsampling and of the choice of topology on the sampled nodes, through a metric termed \emph{the expected consensus rate}. On a number of common topologies, including the ring and the torus, we theoretically and empirically demonstrate the effectiveness of this metric.
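To make the setting concrete, here is a minimal sketch of D-SGD with client sampling on synthetic heterogeneous quadratics. All specifics (the quadratic objectives, the ring mixing weights, the step size, and the noise level) are illustrative assumptions, not the paper's actual experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, d, T, lr = 8, 4, 5, 300, 0.1  # nodes, sampled per step, dim, steps, step size

# Heterogeneous local objectives f_i(x) = 0.5 * ||x - b_i||^2 (assumed for
# illustration); the global minimizer of (1/n) * sum_i f_i is the mean of the b_i.
b = rng.normal(size=(n, d))
x = np.zeros((n, d))  # one model copy per node

def ring_mixing(k):
    """Doubly stochastic mixing matrix for a ring over k sampled nodes."""
    W = np.zeros((k, k))
    for j in range(k):
        W[j, j] = 0.5
        W[j, (j + 1) % k] += 0.25
        W[j, (j - 1) % k] += 0.25
    return W

for t in range(T):
    idx = rng.choice(n, size=s, replace=False)               # sample s of n nodes
    grads = x[idx] - b[idx] + 0.1 * rng.normal(size=(s, d))  # stochastic gradients
    x[idx] = x[idx] - lr * grads                             # local SGD step
    x[idx] = ring_mixing(s) @ x[idx]                         # gossip on sampled ring

err = np.linalg.norm(x.mean(axis=0) - b.mean(axis=0))
print(err)  # distance of the average iterate from the global optimum
```

Only the sampled nodes compute gradients and communicate each round; the mixing step averages models over a ring drawn on the sampled subset, which is the kind of sampled-topology communication whose contraction the expected consensus rate quantifies.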