Personalized Federated Learning with Spurious Features: An Adversarial Approach

Published: 11 Mar 2024, Last Modified: 11 Mar 2024. Accepted by TMLR.
Abstract: One of the common approaches to personalizing federated learning is fine-tuning the global model for each local client. While this addresses some issues of statistical heterogeneity, we find that such personalization methods are vulnerable to spurious features at local clients, leading to reduced generalization performance. This work considers a setup where spurious features correlate with the label in each client's training environment, while the mixture of multiple training environments (i.e., the global environment) diminishes the spurious correlations. In other words, although the global federated learning model trained over the global environment suffers less from spurious features, the local fine-tuning step may yield personalized models that are vulnerable to spurious correlations. In light of this practical and pressing challenge, we propose a novel strategy to mitigate the effect of spurious features during personalization by maintaining adversarial transferability between the global and personalized models. Empirical results on object and action recognition tasks show that our proposed approach prevents personalized models from further exploiting spurious features while preserving the accuracy benefit of fine-tuning.
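To make the abstract's central idea concrete, the following is a minimal sketch of one plausible way to maintain adversarial transferability during local fine-tuning: craft adversarial examples against the frozen global model, then penalize the personalized model when its predictions on those examples diverge from the global model's. The function names (`fgsm_adversarial`, `personalization_step`), the FGSM attack, the KL penalty, and the weight `lam` are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def fgsm_adversarial(model, x, y, eps=0.03):
    """Craft FGSM adversarial examples against the (frozen) global model.

    Note: this is a hypothetical single-step attack used only to
    illustrate the transferability regularizer; the paper's method
    may use a different attack or objective.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()


def personalization_step(global_model, personal_model, x, y, lam=1.0):
    """One fine-tuning loss: task loss plus a transferability penalty.

    The KL term pushes the personalized model to respond to the global
    model's adversarial examples the same way the global model does,
    discouraging it from latching onto client-specific spurious features.
    """
    x_adv = fgsm_adversarial(global_model, x, y)

    # Standard supervised loss on clean local data.
    task_loss = F.cross_entropy(personal_model(x), y)

    # Match the personalized model's adversarial predictions to the
    # global model's (global model is treated as a fixed reference).
    with torch.no_grad():
        g_probs = F.softmax(global_model(x_adv), dim=1)
    p_log_probs = F.log_softmax(personal_model(x_adv), dim=1)
    transfer_loss = F.kl_div(p_log_probs, g_probs, reduction="batchmean")

    return task_loss + lam * transfer_loss
```

In practice one would backpropagate this combined loss through `personal_model` only, with `lam` trading off local accuracy against robustness to spurious correlations.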
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Antti_Honkela1
Submission Number: 1537