On The Resilience Of Online Federated Learning To Model Poisoning Attacks Through Partial Sharing

Published: 01 Jan 2024 · Last Modified: 05 Aug 2025 · ICASSP 2024 · CC BY-SA 4.0
Abstract: We investigate the robustness of the recently introduced partial-sharing online federated learning (PSO-Fed) algorithm against model-poisoning attacks. To this end, we analyze the performance of the PSO-Fed algorithm in the presence of Byzantine clients, who may clandestinely corrupt their local models with additive noise before sharing them with the server. PSO-Fed can operate on streaming data and reduce the communication load by allowing each client to exchange only parts of its model with the server. Our analysis, considering a linear regression task, reveals that the convergence of PSO-Fed can be ensured in the mean sense, even when confronted with model-poisoning attacks. Our extensive numerical results support this claim and demonstrate that PSO-Fed mitigates Byzantine attacks more effectively than its state-of-the-art competitors. Our simulation results also reveal that, when model-poisoning attacks are present, there exists a non-trivial optimal step size for PSO-Fed that minimizes its steady-state mean-square error.
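To illustrate the setting the abstract describes, the following minimal NumPy sketch simulates a PSO-Fed-style linear regression experiment: each client receives and returns only a random subset of model entries, Byzantine clients add Gaussian noise to their uploads, and the server averages the entries it receives. All parameter names and values (e.g., `M`, `noise_std`, the averaging rule) are illustrative assumptions, not the paper's exact algorithm or experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for illustration (not the paper's experiments).
D, K, M = 20, 10, 5        # model dimension, number of clients, shared entries per client
byzantine = {0, 1}         # indices of Byzantine clients (assumption)
mu = 0.02                  # step size of each client's local LMS update
noise_std = 1.0            # std of the additive model-poisoning noise (assumption)
T = 2000                   # number of streaming iterations

w_true = rng.standard_normal(D)   # ground-truth linear regression model
w_global = np.zeros(D)            # server (global) model
w_local = np.zeros((K, D))        # per-client local models

for n in range(T):
    acc = np.zeros(D)             # sum of entries received by the server
    cnt = np.zeros(D)             # how many clients shared each entry

    for k in range(K):
        # Partial sharing: the server sends client k only M random entries.
        idx = rng.choice(D, size=M, replace=False)
        w_local[k, idx] = w_global[idx]

        # Client k observes one streaming sample and performs an LMS step.
        x = rng.standard_normal(D)
        y = x @ w_true + 0.1 * rng.standard_normal()
        w_local[k] += mu * (y - x @ w_local[k]) * x

        # Model poisoning: Byzantine clients corrupt the entries they upload.
        upload = w_local[k, idx].copy()
        if k in byzantine:
            upload += noise_std * rng.standard_normal(M)

        acc[idx] += upload
        cnt[idx] += 1

    # The server averages whatever entries were shared in this round.
    shared = cnt > 0
    w_global[shared] = acc[shared] / cnt[shared]

print("final mean-square deviation:", np.mean((w_global - w_true) ** 2))
```

Consistent with the abstract's claim, the global model in this sketch tracks `w_true` in the mean while the Byzantine noise keeps the steady-state error bounded away from zero; varying `mu` exposes the trade-off behind the non-trivial optimal step size.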