Keywords: Federated Learning, Robustness, MPC, Privacy-Preserving ML
Abstract: Federated averaging, the most popular aggregation approach in federated learning, is known to be vulnerable to failures and adversarial updates from clients that wish to disrupt training. While median aggregation remains one of the most popular alternatives for improving training robustness, naively combining the median with secure multi-party computation (MPC) does not scale. To this end, we propose an efficient approximate median aggregation with MPC privacy guarantees in the multi-silo setting, e.g., across hospitals, with two semi-honest, non-colluding servers. The proposed method protects the confidentiality of client gradient updates against both semi-honest clients and servers. Asymptotically, the cost of our approach scales only linearly with the number of clients, whereas the naive MPC median scales quadratically. Moreover, we prove that the convergence of the proposed federated learning method is robust to a wide range of failures and attacks. Empirically, we show that our method inherits the robustness properties of the median while converging faster than the naive MPC median, even for a small number of clients.
One-sentence Summary: Private and robust federated learning using an approximate median heuristic.
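The abstract contrasts averaging with median aggregation but does not spell out why the median resists corrupted updates. Below is a minimal plaintext sketch of that contrast, not the paper's MPC protocol or its approximate-median heuristic: it compares coordinate-wise averaging with the coordinate-wise median on toy client updates, with all function names and data hypothetical.

```python
import numpy as np

def fedavg(updates):
    # Federated averaging: a single corrupted update can pull
    # the aggregate arbitrarily far from the honest gradients.
    return np.mean(updates, axis=0)

def coordinate_median(updates):
    # Coordinate-wise median: tolerates up to (roughly) half of
    # the clients sending arbitrary (Byzantine) updates.
    return np.median(updates, axis=0)

# Toy example: 4 honest clients near the true gradient, 1 attacker.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(4, 3))
attacker = np.full((1, 3), 1e6)  # adversarial update
updates = np.vstack([honest, attacker])

print("mean   :", fedavg(updates))             # dragged toward 1e6
print("median :", coordinate_median(updates))  # stays near 1.0
```

The paper's contribution is to compute such a median-style aggregate under MPC (with two semi-honest, non-colluding servers) at linear rather than quadratic cost in the number of clients; the sketch above only illustrates the robustness motivation in the clear.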