Towards Multi-level Fairness and Robustness on Federated Learning

28 May 2022, 15:02 (modified: 21 Jul 2022, 01:30) · SCIS 2022 Poster
Keywords: Federated Learning, Fairness, Robustness, Federated Optimization
TL;DR: We formulate the new problem of multi-level fairness and robustness in federated learning and propose an efficient federated optimization algorithm to solve it.
Abstract: Federated learning (FL) has emerged as an important machine learning paradigm in which a global model is trained on the private data of distributed clients. However, the federated model can be biased due to spurious correlations or distribution shift across subpopulations, and it may disproportionately advantage or disadvantage some of those subpopulations, leading to unfairness and non-robustness. In this paper, we formulate the problem of multi-level fairness and robustness on FL: training a global model that simultaneously performs well on existing clients, on subgroups formed by sensitive attribute(s), and on newly added clients. To solve this problem, we propose a unified optimization objective from the view of a federated uncertainty set, with theoretical analyses. We also develop an efficient federated optimization algorithm named Federated Mirror Descent Ascent with Momentum Acceleration (FMDA-M) with a convergence guarantee. Extensive experimental results show that FMDA-M outperforms existing FL algorithms on multi-level fairness and robustness.
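The abstract does not spell out FMDA-M's updates, but the general pattern it names can be sketched: a min-max (distributionally robust) federated objective where the global model is trained by gradient descent while a vector of client weights, representing the uncertainty set over clients, is updated by mirror ascent (an exponentiated-gradient step) with momentum. Everything below is a hypothetical illustration of that pattern, not the paper's algorithm: the synthetic data, the squared loss, and all step sizes are assumptions.

```python
import numpy as np

# Hypothetical sketch of a min-max federated objective solved by
# descent on the model and momentum-accelerated mirror ascent on
# client weights. NOT the paper's FMDA-M; details are assumed.

rng = np.random.default_rng(0)
n_clients, n_samples, dim = 4, 20, 5
w_true = rng.normal(size=dim)

# Heterogeneous clients: a shared signal with client-specific noise levels.
data = []
for i in range(n_clients):
    X = rng.normal(size=(n_samples, dim))
    y = X @ w_true + (0.1 + 0.2 * i) * rng.normal(size=n_samples)
    data.append((X, y))

def client_loss_grad(w, X, y):
    """Squared loss and its gradient for one client."""
    r = X @ w - y
    return 0.5 * np.mean(r ** 2), X.T @ r / len(y)

w = np.zeros(dim)                      # global model (descent variable)
lam = np.ones(n_clients) / n_clients   # client weights on the simplex (ascent variable)
m = np.zeros(n_clients)                # momentum buffer for the ascent direction
eta_w, eta_lam, beta = 0.1, 0.5, 0.9

for _ in range(300):
    losses, grads = zip(*(client_loss_grad(w, X, y) for X, y in data))
    losses = np.array(losses)
    # Descent step: aggregate client gradients under the current weights.
    w = w - eta_w * sum(l * g for l, g in zip(lam, grads))
    # Ascent step: momentum-smoothed losses, then an exponentiated-gradient
    # (mirror) update that keeps lam a valid distribution over clients.
    m = beta * m + (1 - beta) * losses
    lam = lam * np.exp(eta_lam * m)
    lam = lam / lam.sum()
```

The exponentiated update is the mirror-ascent step under the negative-entropy mirror map: it keeps the client weights on the probability simplex without an explicit projection, while the momentum buffer smooths the ascent direction across rounds so that transiently high client losses do not cause the weights to oscillate.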