FedARC: Adaptive Residual Compensation for Data and Model Heterogeneous Federated Learning

ICLR 2026 Conference Submission 17795 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Federated learning, Distributed Machine Learning, Deep Learning Algorithms
Abstract: Federated learning (FL) enables multiple clients to collaboratively train models without sharing private data, but practical FL is hindered by both data heterogeneity and model heterogeneity. Existing heterogeneous FL (HtFL) methods often suffer from inadequate representation alignment and limited knowledge transfer, especially under fine-grained distribution shifts, which restricts both personalization and generalization. To address these challenges, we propose FedARC, a novel HtFL framework with Adaptive Residual Compensation. FedARC adaptively fuses local and global representations through a trainable projector and applies dynamic residual correction to mitigate feature-level distribution mismatches. Moreover, FedARC incorporates semantic anchor alignment to further reduce inter-client feature divergence, thereby stabilizing knowledge transfer and aggregation. We theoretically establish an $O(1/T)$ convergence rate for FedARC in the non-convex setting. Extensive experiments on four public benchmarks demonstrate that FedARC consistently outperforms nine state-of-the-art HtFL baselines, achieving superior accuracy while maintaining efficient communication and computation.
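To make the abstract's mechanism concrete, below is a minimal PyTorch sketch of how adaptive residual compensation and anchor alignment could look on the client side. The paper's exact formulation is not given on this page, so every name here (`AdaptiveResidualHead`, the sigmoid gate, the MSE anchor loss) is a hypothetical reading of the abstract, not the authors' implementation.

```python
# Hypothetical sketch of FedARC-style adaptive residual compensation.
# All module names, shapes, and loss forms are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveResidualHead(nn.Module):
    """Fuses a client's local feature with a shared global representation
    via a trainable projector, then applies a gated residual correction."""

    def __init__(self, local_dim: int, shared_dim: int, num_classes: int):
        super().__init__()
        self.projector = nn.Linear(local_dim, shared_dim)  # trainable projector
        # Data-dependent gate producing the residual weight in [0, 1].
        self.gate = nn.Sequential(nn.Linear(shared_dim * 2, 1), nn.Sigmoid())
        self.classifier = nn.Linear(shared_dim, num_classes)

    def forward(self, h_local: torch.Tensor, g_global: torch.Tensor):
        # Project heterogeneous local features into the shared space.
        z = self.projector(h_local)
        # Residual = feature-level mismatch between the local projection
        # and the global representation.
        residual = g_global - z
        # Gate decides, per sample, how much of the residual to compensate.
        alpha = self.gate(torch.cat([z, g_global], dim=-1))
        z_comp = z + alpha * residual
        return self.classifier(z_comp), z_comp

def anchor_alignment_loss(z: torch.Tensor, labels: torch.Tensor,
                          anchors: torch.Tensor) -> torch.Tensor:
    """Pull compensated features toward class-wise semantic anchors
    (one plausible form of the abstract's anchor alignment term)."""
    return F.mse_loss(z, anchors[labels])

# Example usage with toy shapes:
head = AdaptiveResidualHead(local_dim=512, shared_dim=128, num_classes=10)
h_local = torch.randn(4, 512)            # features from a client's own backbone
g_global = torch.randn(4, 128)           # server-shared global representation
anchors = torch.randn(10, 128)           # one semantic anchor per class
logits, z_comp = head(h_local, g_global)
loss = F.cross_entropy(logits, torch.tensor([0, 1, 2, 3])) \
       + anchor_alignment_loss(z_comp, torch.tensor([0, 1, 2, 3]), anchors)
```

Because only the small head and the shared-space statistics would need to travel, such a design is consistent with the abstract's claim of efficient communication under model heterogeneity; the gating makes the residual correction adaptive rather than a fixed blend.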
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 17795