FedFA: Federated Learning with Feature Alignment for Heterogeneous Data

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Federated learning, feature alignment, data heterogeneity, heterogeneous label distribution, heterogeneous feature distribution
TL;DR: A federated learning framework with feature alignment that tackles data heterogeneity, including label and feature distribution skews across clients, from the novel perspective of a shared feature space built with feature anchors.
Abstract: Federated learning allows multiple clients to collaboratively train a model without exchanging their data, thus preserving data privacy. Unfortunately, it suffers significant performance degradation when client data are heterogeneous. Common solutions involve designing specific regularizers for local-model training or developing aggregation schemes for global-model aggregation. Nevertheless, we find that these methods fail to achieve the desired performance because they neglect the importance of feature-mapping consistency across client models. We first observe and analyze that, with heterogeneous data, a vicious cycle arises between classifier divergence and feature-mapping inconsistency across clients, shifting the aggregated global model away from the expected optimum. We then propose a simple yet effective framework named Federated learning with Feature Alignment (FedFA) to tackle the data heterogeneity problem from the novel perspective of a shared feature space. The key insight of FedFA is to introduce feature anchors that align the feature mappings and calibrate the classifier updates across clients during their local updates, so that client models are updated in a shared feature space. We prove that this modification yields consistent classifier updates when features are class-discriminative. Extensive experiments show that FedFA significantly outperforms state-of-the-art federated learning algorithms on various image classification datasets under both label and feature distribution skews.
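To make the anchor idea in the abstract concrete, below is a minimal PyTorch-style sketch of a client's local update with a feature-alignment term and a classifier-calibration term driven by shared class anchors. The split into `feature_extractor` and `classifier`, the anchor tensor, and the loss weights `lam_align`/`lam_cal` are illustrative assumptions for this sketch, not the authors' exact FedFA implementation.

```python
# Sketch only: feature-anchor-based local update (assumed form, not the paper's code).
import torch
import torch.nn.functional as F

def local_step(feature_extractor, classifier, anchors, x, y,
               optimizer, lam_align=1.0, lam_cal=1.0):
    """One local training step with feature alignment and classifier calibration.

    anchors: tensor of shape (num_classes, feature_dim), shared by all clients,
             serving as the common feature space (hypothetical representation).
    """
    optimizer.zero_grad()
    feats = feature_extractor(x)      # (batch, feature_dim)
    logits = classifier(feats)        # (batch, num_classes)

    # Standard supervised loss on the client's local data.
    ce_loss = F.cross_entropy(logits, y)

    # Feature alignment: pull each sample's feature toward its class anchor,
    # so all clients map same-class samples into a shared region.
    align_loss = F.mse_loss(feats, anchors[y])

    # Classifier calibration: the classifier should also classify the anchors
    # correctly, keeping classifier updates consistent across clients.
    anchor_logits = classifier(anchors)
    cal_loss = F.cross_entropy(
        anchor_logits, torch.arange(anchors.size(0), device=anchors.device))

    loss = ce_loss + lam_align * align_loss + lam_cal * cal_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

After local training, clients would upload their models (and, in one plausible variant, locally averaged class features) to the server for aggregation as in standard federated averaging; that server-side step is omitted here.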
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Supplementary Material: zip
25 Replies
