Neighborhood Gradient Clustering: An Efficient Decentralized Learning Method for Non-IID Data

Published: 19 Jun 2023, Last Modified: 21 Jul 2023, FL-ICML 2023
Keywords: Federated Learning, Decentralized Machine Learning, Peer-to-peer networks, Non-IID distribution, Heterogeneous data distribution, fully decentralized networks
TL;DR: We propose a novel decentralized learning algorithm that improves performance on non-IID data by manipulating local gradients.
Abstract: Decentralized learning algorithms enable the training of deep learning models over large distributed datasets without a central server. In practical scenarios, the data distributions can differ significantly across agents. In this paper, we propose Neighborhood Gradient Clustering (NGC), a novel decentralized learning algorithm that improves decentralized learning over non-IID data. Specifically, the proposed method replaces each agent's local gradient with a weighted mean of self-gradients, model-variant cross-gradients, and data-variant cross-gradients. Model-variant cross-gradients are gradients of the local loss evaluated at the received neighbors' model parameters, computed locally. Data-variant cross-gradients are gradients of the neighbors' losses evaluated at the local model parameters, received through communication. We demonstrate the efficiency of NGC on non-IID data sampled from various vision datasets. Our experiments show that the proposed method either remains competitive with or outperforms (by up to 6%) the existing state-of-the-art (SoTA) with significantly lower compute and memory requirements.
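To make the gradient-mixing step concrete, here is a minimal PyTorch sketch of one agent's update. It is an illustration under stated assumptions, not the paper's exact algorithm: the helper names (`local_gradient`, `ngc_update`), the single mixing weight `alpha`, the uniform averaging over cross-gradients, and the plain SGD step are all placeholders for details the abstract does not specify.

```python
import copy
import torch
import torch.nn as nn

def local_gradient(model, batch, loss_fn):
    """Gradient of loss_fn on `batch`, evaluated at `model`'s parameters."""
    model.zero_grad()
    inputs, targets = batch
    loss_fn(model(inputs), targets).backward()
    return [p.grad.detach().clone() for p in model.parameters()]

def ngc_update(model, local_batch, neighbor_models, neighbor_data_grads,
               loss_fn, alpha=0.5, lr=0.1):
    """One NGC-style step: replace the local gradient with a weighted mean
    of self-, model-variant, and data-variant (cross-)gradients.
    Assumes at least one neighbor; the weighting scheme is a placeholder."""
    # Self-gradient: local loss at the local parameters.
    g_self = local_gradient(model, local_batch, loss_fn)

    # Model-variant cross-gradients: local loss evaluated at each received
    # neighbor's parameters (computed locally).
    g_model = [local_gradient(m, local_batch, loss_fn) for m in neighbor_models]

    # Data-variant cross-gradients: neighbors' losses evaluated at the local
    # parameters (each neighbor computes these and communicates them back).
    cross = g_model + neighbor_data_grads

    with torch.no_grad():
        for k, p in enumerate(model.parameters()):
            mixed = alpha * g_self[k] \
                    + (1 - alpha) * sum(g[k] for g in cross) / len(cross)
            p -= lr * mixed

if __name__ == "__main__":
    torch.manual_seed(0)
    agent, neighbor = nn.Linear(4, 2), nn.Linear(4, 2)
    batch = (torch.randn(8, 4), torch.randint(0, 2, (8,)))
    loss_fn = nn.CrossEntropyLoss()
    # In a real run the neighbor would compute this data-variant cross-gradient
    # on ITS OWN data using a received copy of `agent`'s parameters.
    data_grad = local_gradient(copy.deepcopy(agent), batch, loss_fn)
    ngc_update(agent, batch, [neighbor], [data_grad], loss_fn)
```

Note the asymmetry the abstract highlights: the model-variant terms need only the neighbors' parameters (then differentiate locally), whereas the data-variant terms must be computed remotely and communicated, since the neighbors' datasets never leave their owners.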
Submission Number: 7