Keywords: Federated Learning, Anchors, Privacy Protection
Abstract: Federated learning enables collaborative model training across distributed clients while preserving their data privacy. However, privacy leakage and data heterogeneity remain significant challenges in federated learning. On the one hand, privacy leakage arises when information about client models exposed during client-server communication is exploited to reconstruct sensitive data or misuse client models, compromising both data and model privacy. On the other hand, data heterogeneity limits the generalization capability of the global model on clients, leading to suboptimal performance. Current approaches face a dilemma: stringent privacy constraints degrade model performance or incur substantial training overhead, while methods that address data heterogeneity struggle to provide strong privacy guarantees. To alleviate this dilemma, we propose a simple and novel personalized federated learning method called Federated Anchor-Based LEarning (FABLE), which introduces private anchors during local training. Specifically, clients select private anchors from their local datasets to perform an anchor-aware representation transformation, improving the model's adaptation to local tasks. More importantly, these private anchors not only provide dual protection of data and model privacy but also avoid significant computational or communication overhead and performance sacrifice. Extensive experiments on benchmark datasets under various settings validate the effectiveness of FABLE in terms of both privacy protection and model performance.
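The abstract does not specify how the anchor-aware representation transformation is computed, so the following is a minimal hypothetical sketch, assuming anchors are client-private samples whose encoded features act as local reference points, with each sample re-expressed by its similarities to those anchors. The function name `anchor_transform` and all parameters are illustrative, not the paper's actual method.

```python
# Hypothetical sketch of an anchor-aware representation transformation.
# Assumption: each client encodes a small set of private anchor samples and
# maps every input to its (softmax-normalized) similarities to those anchors,
# so anchors never need to leave the client.
import torch
import torch.nn.functional as F

def anchor_transform(features: torch.Tensor,
                     anchor_features: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """Map features of shape (B, d) to anchor-relative coordinates (B, k).

    anchor_features: (k, d) encodings of k private anchor samples, kept on
    the client and never communicated to the server.
    """
    f = F.normalize(features, dim=-1)
    a = F.normalize(anchor_features, dim=-1)
    sims = f @ a.t() / temperature      # cosine similarity to each anchor
    return F.softmax(sims, dim=-1)      # anchor-relative representation

# Illustrative usage (encoder and data loaders are placeholders):
# anchors = encoder(private_anchor_batch)    # (k, d), stays on the client
# z = anchor_transform(encoder(x), anchors)  # fed to the client's local head
```

Under this reading, the transformed representation depends on data the server never sees, which is one plausible way anchors could provide the dual privacy protection the abstract claims.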
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 14818