Abstract: Federated learning (FL) is a promising emerging paradigm for privacy-preserving machine learning (ML). An important type of FL is cross-silo FL, which enables a moderate number of organizations to cooperatively train a shared model by keeping confidential data local and aggregating gradients on a central parameter server. In practice, however, the central server may be vulnerable to malicious attacks or software failures. To address this issue, in this paper we propose $\mathtt{DegaFL}$, a novel decentralized gradient aggregation approach for cross-silo FL. $\mathtt{DegaFL}$ eliminates the central server by aggregating gradients on each participant, and maintains and synchronizes only the gradients of the current training round. In addition, we propose $\mathtt{AdaAgg}$ to adaptively aggregate correct gradients from honest nodes, and we use HotStuff to ensure consistency of the training round number and gradients among all nodes. Experimental results show that $\mathtt{DegaFL}$ defends against common threat models with minimal accuracy loss, and achieves up to $50\times$ lower storage overhead and up to $13\times$ lower network overhead than state-of-the-art decentralized FL approaches.