FedCav: Contribution-aware Model Aggregation on Distributed Heterogeneous Data in Federated Learning

14 Apr 2023 · OpenReview Archive Direct Upload
Abstract: The emerging federated learning (FL) paradigm allows multiple distributed devices to cooperatively train models in parallel while the raw data remain local. The locally computed parameters are then transferred to a centralized server for aggregation. However, the vanilla aggregation method ignores the heterogeneity of the distributed data, which may lead to slow convergence and low training efficiency. Existing data-scheduling and improved aggregation methods either raise privacy concerns or fail to account for fine-grained heterogeneity. We propose FedCav, a contribution-aware model aggregation algorithm that differentiates the merit of local updates and explicitly favors model-informed contributions. The intuition is that local data exhibiting higher inference loss are likely to yield greater performance improvement. To this end, we design a novel global loss function with an explicit optimization preference for informative local updates, theoretically prove its convexity, and use it to iteratively regulate the gradient descent process. Additionally, we propose to identify abnormal updates that report fake losses by auditing historical local training statistics. Extensive experiments demonstrate that FedCav converges in roughly 34% fewer training rounds and achieves about 2.4% higher inference accuracy than the baselines (FedAvg and FedProx). We also observe that FedCav actively mitigates model replacement attacks and allows the aggregation to recover quickly.
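As a rough illustration of the mechanism the abstract describes, the Python sketch below weights each client's update by its reported local inference loss (here via a softmax, so higher-loss clients contribute more to the aggregate) and audits a reported loss against that client's history with a simple z-score check. The function names, the softmax weighting, and the z-score threshold are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def fedcav_aggregate(client_params, client_losses):
        """Aggregate flattened client parameters, weighting each update by
        its reported local inference loss (higher loss -> larger weight).
        NOTE: softmax weighting is an assumed stand-in for the paper's
        contribution-aware global loss, which is not given in the abstract.
        """
        losses = np.asarray(client_losses, dtype=float)
        # Softmax over losses: clients with higher loss contribute more.
        weights = np.exp(losses - losses.max())
        weights /= weights.sum()
        stacked = np.stack(client_params)  # shape: (num_clients, dim)
        return (weights[:, None] * stacked).sum(axis=0)

    def flag_abnormal(loss_history, new_loss, z_thresh=3.0):
        """Flag a reported loss as suspicious if it deviates strongly from
        the client's historical loss statistics (simple z-score audit);
        the threshold of 3.0 is an illustrative choice."""
        hist = np.asarray(loss_history, dtype=float)
        if hist.size < 2:
            return False
        std = hist.std()
        if std == 0:
            return new_loss != hist.mean()
        return abs(new_loss - hist.mean()) / std > z_thresh

    # Example: three clients, one reporting a notably higher local loss.
    params = [np.zeros(4), np.ones(4), 2 * np.ones(4)]
    losses = [0.2, 0.5, 1.3]
    global_update = fedcav_aggregate(params, losses)

In this sketch, a client whose reported loss fails the audit would simply be excluded from the aggregation round; the paper's actual recovery behavior against model replacement attacks is evaluated experimentally.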