FedRAC: Rolling Submodel Allocation for Collaborative Fairness in Federated Learning

18 Sept 2025 (modified: 13 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: federated learning, collaborative fairness, privacy
Abstract: Collaborative fairness in federated learning ensures that clients are rewarded according to their contributions, thereby fostering long-term client participation. However, existing methods often under-reward low-contributing clients in the early training stage and neglect critical issues, such as inconsistency across local models (i.e., inter-model inconsistency) and unequal neuron training frequencies in the aggregated model (i.e., intra-model inconsistency), both of which degrade performance. To address these issues, we propose FedRAC, a novel Federated learning framework employing Rolling submodel Allocation for Collaborative fairness, without compromising global model performance. First, we design a dynamic reputation calculation module with a theoretical fairness guarantee that generates reputations matching clients’ contributions. It adjusts reputations dynamically during training, ensuring that low-contributing clients can access better models in the early stages and thus receive adequate training. Second, we propose a rolling submodel allocation module that assigns high-performance submodels to clients with high reputations. This module prioritizes low-frequency neurons during allocation and is supported by theoretical convergence guarantees, ensuring that all neurons in the global model are fully trained. Extensive experiments on three public datasets confirm the advantages of our method in terms of fairness and model accuracy.
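The abstract's core mechanism can be illustrated with a minimal sketch. The code below is not the paper's algorithm; the function name, the linear reputation-to-size mapping, and the `min_frac` parameter are all illustrative assumptions. It shows the two ideas the abstract names: higher-reputation clients receive larger (hence better-performing) submodels, and neurons that have been trained least often so far are allocated first, so every neuron of the global model eventually gets trained.

```python
import numpy as np

def allocate_submodels(reputations, num_neurons, min_frac=0.3):
    """Illustrative sketch of reputation-based rolling submodel allocation.

    reputations: dict mapping client id -> reputation in [0, 1]
    num_neurons: number of neurons in the (flattened) global model
    min_frac:    hypothetical floor on submodel size, so even the
                 lowest-reputation client trains some neurons
    """
    train_counts = np.zeros(num_neurons, dtype=int)  # per-neuron training frequency
    allocations = {}
    # Serve high-reputation clients first (assumed ordering).
    for client, rep in sorted(reputations.items(), key=lambda kv: -kv[1]):
        # Submodel size grows with reputation (illustrative linear mapping).
        frac = min_frac + (1.0 - min_frac) * rep
        k = max(1, int(round(frac * num_neurons)))
        # "Rolling" step: pick the k least-frequently-trained neurons,
        # counteracting intra-model inconsistency.
        chosen = np.argsort(train_counts, kind="stable")[:k]
        train_counts[chosen] += 1
        allocations[client] = np.sort(chosen)
    return allocations, train_counts

# Toy usage: three clients with unequal reputations.
reps = {"A": 0.9, "B": 0.5, "C": 0.2}
alloc, counts = allocate_submodels(reps, num_neurons=10)
```

Here the highest-reputation client "A" receives the largest submodel, and because each round starts from the least-trained neurons, the training frequencies stay balanced across the global model.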
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 11260