FedReLa: Imbalanced Federated Learning via Re-Labeling

ICLR 2026 Conference Submission 16382 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Federated Learning, Imbalanced Learning, Long-tailed Learning, Data Heterogeneity
Abstract: Federated learning has emerged as the foremost approach to decentralized model training with privacy preservation. Global class imbalance and cross-client data heterogeneity naturally coexist, and the mismatch between local and global imbalances exacerbates the performance degradation of the aggregated model. Because individual clients cannot identify the globally minority classes, data-level methods face significant challenges, especially under extreme conditions where many classes are severely deficient across clients. In this paper, we propose FedReLa, a novel data-level approach that tackles the coexistence of data heterogeneity and class imbalance in federated learning. By re-labeling samples with a feature-dependent label re-allocator, FedReLa corrects biased decision boundaries without requiring knowledge of the global class distribution. This modular, model-agnostic approach can be combined with algorithm-level methods to offer consistent improvements without any extra communication burden. Extensive experiments show that our method significantly improves both minority-class accuracy and overall accuracy on step-wise imbalanced and long-tailed datasets, outperforming the previous state of the art.
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 16382
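The abstract does not give implementation details, but the core step it names, re-labeling local samples with a feature-dependent label re-allocator, can be illustrated with a minimal sketch. The sketch below assumes the re-allocator compares each sample's feature to locally estimated class prototypes and reassigns the label when another prototype is clearly closer in cosine similarity; the function name relabel_by_prototype, the prototype criterion, and the margin parameter are all illustrative assumptions, not the authors' actual method.

# Illustrative sketch only (assumed mechanism, not the FedReLa implementation):
# reassign a sample's label when its feature is markedly closer to another
# class's local prototype than to its own class's prototype.
import torch
import torch.nn.functional as F

def relabel_by_prototype(features: torch.Tensor,
                         labels: torch.Tensor,
                         num_classes: int,
                         margin: float = 0.1) -> torch.Tensor:
    """Return re-allocated labels based on cosine similarity to local class prototypes."""
    feats = F.normalize(features, dim=1)                      # (N, D) unit-norm features
    # Local class prototypes: mean normalized feature of each class present on the client.
    protos = torch.zeros(num_classes, feats.size(1))
    for c in labels.unique():
        protos[c] = feats[labels == c].mean(dim=0)
    protos = F.normalize(protos, dim=1)
    sims = feats @ protos.t()                                 # (N, C) cosine similarities
    own = sims.gather(1, labels.view(-1, 1)).squeeze(1)       # similarity to own-class prototype
    best_sim, best_cls = sims.max(dim=1)
    # Re-label only when another prototype is closer by at least `margin` (hypothetical threshold).
    relabel = best_sim > own + margin
    return torch.where(relabel, best_cls, labels)

# Usage: features would come from the local model's penultimate layer for a mini-batch.
feats = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))
new_labels = relabel_by_prototype(feats, labels, num_classes=10)

Because the re-allocation uses only locally computable feature statistics, a scheme of this kind would require no knowledge of the global class distribution and no additional communication, which is consistent with the properties claimed in the abstract.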