Federated Learning via Consensus Mechanism on Heterogeneous Data: A New Perspective on Convergence

Published: 01 Jan 2024, Last Modified: 31 Jul 2025 · ICASSP 2024 · CC BY-SA 4.0
Abstract: Federated learning (FL) on heterogeneous (non-IID) data has recently received great attention. Most existing methods focus on convergence guarantees for the global objective. While these methods can guarantee a decrease of the global objective in each communication round, they fail to ensure a risk decrease for each client. In this paper, we propose FedCOME, which introduces a consensus mechanism aiming at a decreased risk for each client after each training round. In particular, we allow a slight adjustment to a client's gradient on the server side, producing an acute angle between the corrected gradient and the original gradients of all participating clients. To generalize the consensus mechanism to the partial-participation FL scenario, we devise a novel client sampling strategy that enhances the representativeness of the selected client subset so that it more accurately reflects the global population. Training these selected clients with the consensus mechanism empirically leads to a risk decrease even for clients that are not selected. Finally, we conduct extensive experiments on four benchmark datasets to show the superiority of FedCOME over other state-of-the-art methods in terms of effectiveness and efficiency. For reproducibility, we make our source code publicly available at: https://github.com/fedcome/fedcome.
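The abstract does not give FedCOME's exact correction rule, but the core idea (adjusting the aggregated update so that it forms an acute angle, i.e., a positive inner product, with every participating client's gradient) can be sketched with a simple conflict-resolving projection. The function name and the projection strategy below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def consensus_correct(grads):
    """Illustrative sketch (not FedCOME's exact rule): start from the
    averaged gradient and, for any client gradient that conflicts with it
    (negative inner product), project out the conflicting component so the
    corrected update forms a non-obtuse angle with that client's gradient."""
    g = np.mean(grads, axis=0)
    for gi in grads:
        dot = g @ gi
        norm_sq = gi @ gi
        if dot < 0 and norm_sq > 0:
            # Remove the component of g that opposes gi, so g @ gi becomes 0.
            g = g - (dot / norm_sq) * gi
    return g

# When no client gradient conflicts with the mean, it is returned unchanged:
grads = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(consensus_correct(grads))  # [0.5 0.5]
```

Note that a single sequential pass like this does not, in general, guarantee acute angles with all clients simultaneously; it is only meant to convey the geometric intuition behind the server-side correction.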