Privacy-Preserving Coded Schemes for Multi-Server Federated Learning With Straggling Links

Published: 01 Jan 2025, Last Modified: 20 May 2025 · IEEE Trans. Inf. Forensics Secur. 2025 · CC BY-SA 4.0
Abstract: Federated Learning (FL) has emerged as a prominent machine learning paradigm in which multiple edge clients jointly train a global model without sharing their raw data. However, sharing local models or gradients still compromises clients' privacy and is susceptible to delivery failures over unreliable communication links. To address these issues, this paper considers a multi-server FL setting in which $E$ edge clients wish to jointly train the global model with the help of $H$ servers, while guaranteeing data privacy and simultaneously tolerating up to $s\leq H$ unreliable links per client. We first propose a hybrid coding scheme based on repetition coding and MDS coding, such that any $T_{s}$ colluding servers cannot deduce anything about the client data beyond the aggregated model, and any $T_{e}$ colluding clients remain unaware of the honest clients' data. We then propose a Lagrange coding with mask (LCM) scheme that provides more stringent privacy protection, additionally requiring that colluding servers learn nothing about either the local or the global models. Furthermore, we establish lower bounds on both the uplink and downlink communication loads and theoretically prove that the hybrid scheme and the LCM scheme achieve the optimal uplink communication loads under the first and second threat models, respectively. For the second threat model with no straggling links, the LCM scheme is optimal. These results demonstrate the communication efficiency, robustness, and privacy guarantees of our schemes.
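To make the privacy mechanism concrete, the following is a minimal sketch of polynomial secret sharing over a prime field, the building block underlying Lagrange-coded schemes of this kind: a client's value is hidden behind a random degree-$T$ polynomial, so any $T$ colluding servers see only uniformly random shares, while any $T+1$ shares reconstruct the value. All names, the field modulus, and the single-scalar setting are illustrative assumptions, not the paper's actual construction (which encodes model partitions and handles stragglers and aggregation).

```python
import random

P = 2**31 - 1  # illustrative prime field modulus (assumption, not from the paper)

def share(secret, n_servers, t_colluding):
    """Hide `secret` in a random degree-t polynomial f with f(0) = secret.
    Server i receives the share (i, f(i)); any t shares are jointly
    uniformly random, so t colluding servers learn nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t_colluding)]
    def f(z):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * z + c) % P
        return acc
    return [(i, f(i)) for i in range(1, n_servers + 1)]

def reconstruct(shares):
    """Recover f(0) by Lagrange interpolation from any t+1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

For example, with 5 servers and privacy threshold $T=2$, any 3 of the 5 shares suffice: `reconstruct(share(42, 5, 2)[:3])` returns 42, which also illustrates straggler tolerance, since up to 2 missing shares do not prevent decoding.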