Delayed Momentum Aggregation: Communication-efficient Byzantine-robust Federated Learning with Partial Participation
Keywords: Federated Learning, Byzantine-robust Optimization, Communication-efficient Distributed Training
Abstract: Federated Learning (FL) allows distributed model training across multiple clients while preserving data privacy, but it remains vulnerable to Byzantine clients that exhibit malicious behavior.
While existing Byzantine-robust FL methods provide strong convergence guarantees (e.g., to a stationary point in expectation) under Byzantine attacks, they typically assume full client participation, which is unrealistic due to communication constraints and client availability.
Under partial participation, existing methods can fail as soon as the sampled clients happen to contain a Byzantine majority, creating a fundamental challenge for sparse communication.
To address this challenge, we first introduce \emph{delayed momentum aggregation}, a novel principle in which the server aggregates the most recently received momenta from non-participating clients alongside fresh momenta from actively participating clients.
Our optimizer \emph{D-Byz-SGDM} (Delayed Byzantine-robust SGD with Momentum) implements this delayed momentum aggregation principle for Byzantine-robust FL with partial participation.
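To make the aggregation principle concrete, the following is a minimal sketch of how a server might maintain a per-client momentum buffer and robustly aggregate both fresh and stale momenta under partial participation. The toy client objective, hyperparameter names (beta, lr, sample_size), and the coordinate-wise median aggregator are illustrative assumptions, not the paper's D-Byz-SGDM specification.

# Sketch of delayed momentum aggregation under partial participation.
# Assumptions (not from the paper): a toy quadratic client loss, SGD
# momentum on each client, and coordinate-wise median as a stand-in
# for a generic Byzantine-robust aggregator.
import numpy as np

rng = np.random.default_rng(0)

num_clients, dim = 10, 5
beta, lr, rounds, sample_size = 0.9, 0.1, 50, 3

x = np.zeros(dim)                               # global model parameters
momentum_buffer = np.zeros((num_clients, dim))  # last momentum received from each client

def client_gradient(i, x):
    """Stochastic gradient of a toy heterogeneous quadratic (illustrative only)."""
    target = np.full(dim, float(i % 3))
    return (x - target) + 0.1 * rng.standard_normal(dim)

def robust_aggregate(momenta):
    """Coordinate-wise median as a placeholder robust aggregator."""
    return np.median(momenta, axis=0)

for t in range(rounds):
    # Partial participation: only a random subset of clients communicates this round.
    active = rng.choice(num_clients, size=sample_size, replace=False)
    for i in active:
        g = client_gradient(i, x)
        # Fresh momentum from active clients overwrites their buffer entry.
        momentum_buffer[i] = beta * momentum_buffer[i] + (1 - beta) * g
    # Delayed momentum aggregation: the server aggregates the buffered
    # (possibly stale) momenta of ALL clients, not just the sampled ones.
    update = robust_aggregate(momentum_buffer)
    x = x - lr * update

Because the robust aggregator always sees one momentum vector per client, a Byzantine majority among the sampled clients in a given round cannot, by itself, dominate the aggregation, which is the intuition this sketch is meant to convey.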
Then, we establish convergence guarantees that recover previous full participation results and match the fundamental lower bounds we prove for the partial participation setting.
Experiments on deep learning tasks validate our theoretical findings, showing stable and robust training under a variety of Byzantine attacks.
Submission Number: 126