Keywords: Personalized Federated Learning, Optimization, Generalization, Byzantine Robustness
TL;DR: Full collaboration can be suboptimal in the presence of heterogeneity and Byzantine adversaries; we shed light on when personalization can improve robustness.
Abstract: Federated learning (FL) is an appealing paradigm that allows a group of machines
(a.k.a. clients) to learn collectively while keeping their data local. However, due
to the heterogeneity between the clients’ data distributions, the model obtained
through the use of FL algorithms may perform poorly on some clients' data.
Personalization addresses this issue by enabling each client to have a different
model tailored to their own data while simultaneously benefiting from the other
clients’ data. We consider an FL setting where some clients can be adversarial, and
we derive conditions under which full collaboration fails. Specifically, we analyze
the generalization performance of an interpolated personalized FL framework in the
presence of adversarial clients, and we precisely characterize situations when full
collaboration performs strictly worse than fine-tuned personalization. Our analysis
determines how much we should scale down the level of collaboration, according
to data heterogeneity and the tolerable fraction of adversarial clients. We support
our findings with empirical results on mean estimation and binary classification
problems, using both synthetic and benchmark image classification datasets.
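To make the interpolation idea concrete, below is a minimal sketch on one-dimensional mean estimation. The collaboration level lambda, the Gaussian data model, and the trimmed-mean aggregator are illustrative assumptions rather than the paper's exact framework: each honest client blends its local estimate with a robustly aggregated global one, and under high heterogeneity a smaller lambda can yield lower error, mirroring the abstract's claim that full collaboration can be strictly worse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed, not from the paper): 20 clients, 4 Byzantine,
# each honest client draws 50 samples from N(theta_i, sigma^2), with client
# means theta_i spread with heterogeneity scale tau.
n_clients, n_byz, n_samples, sigma, tau = 20, 4, 50, 1.0, 2.0
true_means = rng.normal(0.0, tau, size=n_clients)

# Each client's local estimate is its sample mean.
local = np.array([rng.normal(m, sigma, n_samples).mean() for m in true_means])

# Byzantine clients report an arbitrary corrupted value instead.
reported = local.copy()
reported[:n_byz] = 100.0

def trimmed_mean(x, f):
    """Robust aggregation: drop the f smallest and f largest reports, average the rest."""
    s = np.sort(x)
    return s[f:len(s) - f].mean()

global_est = trimmed_mean(reported, n_byz)

def personalized_mse(lam):
    """MSE of honest clients' interpolated estimates lam*global + (1-lam)*local."""
    est = lam * global_est + (1 - lam) * local[n_byz:]
    return np.mean((est - true_means[n_byz:]) ** 2)

for lam in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"lambda={lam:.2f}  MSE={personalized_mse(lam):.4f}")
```

With these heterogeneous parameters the error grows as lambda approaches 1 (full collaboration); with tau close to 0 the ordering reverses, which is the trade-off the paper's analysis characterizes.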
Supplementary Material: zip
Primary Area: Other (please use sparingly, only use the keyword field for more details)
Submission Number: 19836