Generalization Error Matters in Decentralized Learning Under Byzantine Attacks

Published: 01 Jan 2025, Last Modified: 14 May 2025 · IEEE Trans. Signal Process. 2025 · CC BY-SA 4.0
Abstract: Recently, decentralized learning has emerged as a popular peer-to-peer signal and information processing paradigm that enables model training across geographically distributed agents in a scalable manner, without any central server. When some of the agents are malicious (also termed Byzantine), resilient decentralized learning algorithms are able to limit the impact of these Byzantine agents without knowing their number or identities, and have guaranteed optimization errors. However, analysis of the generalization errors, which are critical to the deployment of the trained models, is still lacking. In this paper, we provide the first analysis of the generalization errors for a class of popular Byzantine-resilient decentralized stochastic gradient descent (DSGD) algorithms. Our theoretical results reveal that the presence of Byzantine agents introduces additional error terms in the generalization error bounds, which are independent of the number of training samples. Numerical experiments are conducted to confirm our theoretical results.
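To illustrate the class of algorithms the abstract refers to, here is a minimal sketch of one Byzantine-resilient DSGD update at a single agent, using coordinate-wise trimmed mean as the robust aggregation rule. This is an illustrative assumption: the paper covers a class of such algorithms, and trimmed mean is just one common choice of aggregator; the function names, the trimming parameter `b`, and the toy numbers below are all hypothetical.

```python
import numpy as np

def trimmed_mean(vectors, b):
    """Coordinate-wise trimmed mean: in each coordinate, drop the b
    largest and b smallest values, then average the rest. A common
    Byzantine-resilient aggregation rule (illustrative choice here)."""
    arr = np.sort(np.stack(vectors), axis=0)  # sort each coordinate independently
    return arr[b:len(vectors) - b].mean(axis=0)

def resilient_dsgd_step(x_i, neighbor_models, grad, lr, b):
    """One Byzantine-resilient DSGD update at agent i: robustly
    aggregate the local and neighboring models, then take a local
    stochastic gradient step."""
    agg = trimmed_mean([x_i] + neighbor_models, b)
    return agg - lr * grad

# Toy example: three honest neighbors near the model [1, 1],
# plus one Byzantine neighbor sending an arbitrary outlier.
honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
byzantine = [np.array([100.0, -100.0])]
x = np.array([1.0, 1.0])
x_new = resilient_dsgd_step(x, honest + byzantine,
                            grad=np.zeros(2), lr=0.1, b=1)
```

With trimming parameter `b=1`, the single Byzantine outlier is discarded in every coordinate, so `x_new` stays close to the honest models near `[1, 1]` rather than being dragged toward `[100, -100]`.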