Orthogonal Calibration for Asynchronous Federated Learning

ICLR 2026 Conference Submission14625 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Asynchronous federated learning, Heterogeneity, Orthogonalization
Abstract: Asynchronous federated learning mitigates the inefficiency of conventional synchronous protocols by integrating updates as they arrive. Due to asynchrony and data heterogeneity, learning objectives at the global and local levels are inherently inconsistent—global optimization trajectories may conflict with ongoing local updates. Existing asynchronous methods simply distribute the latest global weights to clients, which can overwrite local progress and cause model drift. In this paper, we propose OrthoFL, an orthogonal calibration framework that decouples global and local learning progress to reduce interference. In OrthoFL, clients and the server maintain separate model weights. Upon receiving an update, the server aggregates it into the global weights via a staleness-aware moving average. For client weights, OrthoFL computes the global weight shift during the client's delay and removes its projection onto the direction of the received update. The resulting parameters lie in a subspace orthogonal to the client update and preserve the maximal information from the global progress within the orthogonal hyperplane. The calibrated shift is then merged into the client weights for further training. Extensive experiments demonstrate OrthoFL improves accuracy by 9.6% and achieves a speed-up of 12× compared to synchronous methods. Moreover, it consistently outperforms state-of-the-art asynchronous baselines under various delay patterns and heterogeneity scenarios.
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 14625