A Robust Training Method for Federated Learning with Partial Participation

14 Apr 2025 (modified: 29 Oct 2025) · Submitted to NeurIPS 2025 · CC BY 4.0
Keywords: Partial participation, Stochastic optimization, Convex optimization, Non-convex optimization
Abstract: Client weighting and partial participation are key techniques in federated learning: they reduce communication costs and balance the data contributions used for model training. Numerous such strategies are well established in the research community, and there is growing interest in a unified theory that covers them. In this paper, we study this question in detail. We propose a method in which each client locally accumulates the gradients left unused in the current iteration and, once full aggregation occurs, leverages them for effective training. Our framework supports a wide class of weighting and sampling heuristics. Furthermore, we show that the proposed approach is robust to periodic client disconnections. To validate it, we conduct a series of numerical experiments involving the training of convolutional and transformer-based architectures.
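To make the accumulation idea in the abstract concrete, below is a minimal sketch on a toy quadratic objective. It is an illustrative assumption, not the paper's actual algorithm: the names (acc, sampled_per_round, targets), the uniform client sampling, and the plain averaging rule are all hypothetical choices made for the sketch. Each client keeps adding its gradient to a local accumulator while it is not sampled; when it is sampled, the whole accumulator enters the aggregated update and the buffer is flushed.

```python
# Hypothetical sketch of gradient accumulation under partial participation.
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim, lr, rounds, sampled_per_round = 10, 5, 0.1, 50, 3

# Toy objective: client i minimizes 0.5 * ||x - b_i||^2, so its gradient
# at the global model x is simply x - b_i.
targets = rng.normal(size=(num_clients, dim))
x = np.zeros(dim)                    # global model
acc = np.zeros((num_clients, dim))   # per-client gradient accumulators

for t in range(rounds):
    sampled = rng.choice(num_clients, size=sampled_per_round, replace=False)
    for i in range(num_clients):
        acc[i] += x - targets[i]     # every client accumulates its gradient
    # Only sampled clients communicate: each sends its whole accumulator
    # (the current gradient plus everything stored while disconnected).
    update = np.mean(acc[sampled], axis=0)
    acc[sampled] = 0.0               # flush the buffers that were sent
    x -= lr * update

print("distance to mean target:", np.linalg.norm(x - targets.mean(axis=0)))
```

The flush after aggregation is the key design point of the sketch: a client that was disconnected for several rounds does not lose its local progress, since its stale gradients re-enter the global update the next time it participates.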
Supplementary Material: zip
Primary Area: Optimization (e.g., convex and non-convex, stochastic, robust)
Submission Number: 2351