Abstract: Federated learning has emerged as a promising, massively distributed way to train a joint deep model across a large number of edge devices while keeping private user data strictly on device. In this work, motivated by ensuring fairness among users and robustness against malicious adversaries, we formulate federated learning as multi-objective optimization and propose a new algorithm FedMGDA+ that is guaranteed to converge to Pareto stationary solutions. FedMGDA+ is simple to implement, has fewer hyperparameters to tune, and refrains from sacrificing the performance of any participating user. We establish the convergence properties of FedMGDA+ and point out its connections to existing approaches. Extensive experiments on a variety of datasets confirm that FedMGDA+ compares favorably against state-of-the-art methods.
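The abstract does not spell out the algorithm, but its name points to the multiple-gradient descent algorithm (MGDA) for multi-objective optimization. As a rough illustration only, the sketch below shows the classical MGDA min-norm subproblem that such a method would solve at the server each round, assuming per-client updates g_i are available there; the function name mgda_weights, the toy data, and the use of SciPy's SLSQP solver are our own choices for exposition, not details taken from the paper.

    import numpy as np
    from scipy.optimize import minimize

    def mgda_weights(grads):
        """Solve min_{lambda in simplex} || sum_i lambda_i g_i ||^2.

        grads: (n_clients, dim) array of per-client pseudo-gradients.
        Returns simplex weights lambda* whose weighted sum
        d = grads.T @ lambda* is a common descent direction
        (no client's loss increases to first order along -d).
        """
        n = grads.shape[0]
        G = grads @ grads.T  # Gram matrix of client gradients

        def objective(lam):
            return lam @ G @ lam  # squared norm of the combined direction

        constraints = {"type": "eq", "fun": lambda lam: lam.sum() - 1.0}
        bounds = [(0.0, 1.0)] * n
        lam0 = np.full(n, 1.0 / n)  # uniform (FedAvg-like) starting point
        res = minimize(objective, lam0, method="SLSQP",
                       bounds=bounds, constraints=constraints)
        return res.x

    # Toy usage: three clients, a five-parameter model.
    rng = np.random.default_rng(0)
    grads = rng.normal(size=(3, 5))
    lam = mgda_weights(grads)
    direction = grads.T @ lam  # server applies a step along -direction
    print(lam, np.linalg.norm(direction))

Note that at the min-norm solution the combined direction has a nonnegative inner product with every client's gradient, which is what underlies the "no participating user is sacrificed" property; the actual FedMGDA+ algorithm adds further ingredients (e.g., its convergence analysis and robustness mechanisms) described in the paper itself.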