PriRoAgg: Achieving robust model aggregation with minimum privacy leakage for federated learning
Abstract: Federated learning (FL), a promising machine learning paradigm for large-scale distributed data, faces two key security challenges: privacy and robustness. The transmitted model updates can leak sensitive user information, and the lack of central control over local training leaves the global model susceptible to malicious manipulation. Existing solutions that attempt to address both problems under the single-server FL setting fall short in two respects: 1) they support only simple validity checks that are insufficient against advanced attacks (e.g., checking the norm of each individual update); and 2) they leak partial private information when executing more sophisticated robust aggregation algorithms (e.g., pairwise distances between model updates are revealed in multi-Krum). In this work, we formalize a novel security notion of aggregated privacy that characterizes the minimum amount of user information, in the form of aggregated statistics of users' updates, that must be revealed to accomplish more advanced robust aggregation. We develop a general framework, PriRoAgg, that leverages Lagrange coded computing and distributed zero-knowledge proofs to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy. As concrete instantiations of PriRoAgg, we construct two secure and robust protocols based on state-of-the-art robust algorithms, for which we provide full theoretical analyses of security and complexity. Extensive experiments demonstrate these protocols' robustness against various model-integrity attacks and their efficiency advantages over baselines.
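To make the leakage concern concrete, the following is a minimal, illustrative sketch of standard multi-Krum (the Byzantine-robust aggregation rule of Blanchard et al., 2017), not PriRoAgg's private protocol; the function name and parameter choices here are ours for illustration. The pairwise distances it computes are exactly the kind of intermediate, per-user information that a plaintext robust aggregator exposes and that the aggregated-privacy notion aims to bound.

```python
import numpy as np

def multi_krum(updates, f, m):
    """Illustrative (non-private) multi-Krum selection.

    updates: list of n flattened model-update vectors
    f: assumed number of Byzantine clients
    m: number of updates to select and average

    Note: the pairwise distances computed below are the
    intermediate statistics that, per the abstract, a plaintext
    robust aggregator would reveal about individual updates.
    """
    n = len(updates)
    X = np.stack(updates)  # shape (n, d)
    # Pairwise squared Euclidean distances between all updates.
    diff = X[:, None, :] - X[None, :, :]
    dists = np.einsum('ijk,ijk->ij', diff, diff)
    # Score each update by the sum of squared distances to its
    # n - f - 2 closest peers (excluding itself).
    scores = []
    for i in range(n):
        d = np.delete(dists[i], i)
        scores.append(np.sum(np.sort(d)[: n - f - 2]))
    # Keep the m lowest-scoring updates and average them.
    selected = np.argsort(scores)[:m]
    return X[selected].mean(axis=0)

# Example: 10 clients, tolerate f=2 Byzantine updates, keep m=6.
rng = np.random.default_rng(0)
updates = [rng.normal(size=128) for _ in range(10)]
aggregated = multi_krum(updates, f=2, m=6)
```

PriRoAgg's contribution, as described above, is to execute such algorithms so that only the necessary aggregated statistics, rather than these per-pair distances, are revealed.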