UFL: Uncertainty-Driven Federated Learning

ICLR 2026 Conference Submission 14584 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Federated Learning, Privacy-Preserving, Uncertainty, Monte Carlo Dropout, Data Heterogeneity
Abstract: Federated Learning (FL), a privacy-preserving distributed machine learning paradigm, encounters numerous challenges in practical applications, notably Data Heterogeneity (DH). Current methods address DH primarily through coarse dataset statistics, server-side aggregation, or local model-level uncertainty. This paper reveals that FL exhibits a distinct sample-level uncertainty distribution during training, characterized by a pronounced long-tail effect. We further show that this long-tail effect is not solely attributable to DH, but is also an inherent characteristic of the FL framework itself. Motivated by this observation, we propose Uncertainty-driven Federated Learning (UFL), a framework designed to address the uncertainty challenge at the sample level. UFL employs Monte Carlo (MC) dropout to estimate per-sample uncertainty and adaptively re-weights the loss function accordingly. Moreover, we design U-Agg, a robust aggregation method that uses each client's accumulated uncertainty over high-uncertainty samples to adjust aggregation weights, improving convergence with theoretical guarantees. Unlike existing approaches that alleviate DH at coarser levels, UFL introduces a sample-centric perspective that directly addresses the uncertainty challenge at its source, offering an orthogonal yet complementary dimension to traditional techniques. Extensive experiments demonstrate that UFL outperforms state-of-the-art FL methods by mitigating the long-tail effect of sample uncertainty.
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 14584
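
The abstract describes two mechanisms: MC-dropout-based per-sample uncertainty estimation with loss re-weighting on clients, and uncertainty-aware aggregation (U-Agg) on the server. Below is a minimal, illustrative PyTorch sketch of what such steps could look like. The use of predictive entropy as the uncertainty score, the `beta` weighting scheme, and the softmax down-weighting of high-uncertainty clients in `u_agg` are all assumptions for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def mc_dropout_uncertainty(model, x, n_passes=10):
    """Estimate per-sample uncertainty via Monte Carlo dropout:
    keep dropout active and average predictions over stochastic passes."""
    model.train()  # keep dropout layers stochastic at inference
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_passes)]
        )  # (n_passes, batch, classes)
    mean_probs = probs.mean(dim=0)
    # Predictive entropy of the averaged distribution (assumed score)
    return -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)

def uncertainty_weighted_loss(model, x, y, beta=1.0):
    """Re-weight per-sample cross-entropy by estimated uncertainty.
    This linear scheme is hypothetical; the paper's re-weighting may differ."""
    u = mc_dropout_uncertainty(model, x)
    weights = 1.0 + beta * u / (u.mean() + 1e-12)
    per_sample = F.cross_entropy(model(x), y, reduction="none")
    return (weights.detach() * per_sample).mean()

def u_agg(client_states, client_uncertainty_totals):
    """Illustrative server aggregation: weights derived from each client's
    accumulated uncertainty. The direction (down-weighting high-uncertainty
    clients) and the softmax form are assumptions, not the paper's U-Agg."""
    scores = torch.tensor(client_uncertainty_totals, dtype=torch.float32)
    weights = F.softmax(-scores, dim=0)
    agg = {k: torch.zeros_like(v, dtype=torch.float32)
           for k, v in client_states[0].items()}
    for w, state in zip(weights, client_states):
        for k, v in state.items():
            agg[k] += w * v.float()
    return agg
```

A client would call `uncertainty_weighted_loss` in place of plain cross-entropy during local training, then report its accumulated uncertainty alongside its model update so the server can apply `u_agg`.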