Fed-ARPL: Adaptive and Reciprocal Prototype Learning for Semi-supervised Federated Learning

18 Sept 2025 (modified: 13 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Federated Learning, Semi-supervised Learning, Prototype Learning
Abstract: Federated Semi-supervised Learning (FSSL) enables collaborative training by leveraging a small labeled dataset on a central server and vast unlabeled data across clients. However, existing frameworks are hampered by two challenges: an initial Cold-start phase, where strict pseudo-label filtering criteria impede the use of unlabeled data, and a subsequent Knowledge Bottleneck, where the model's performance is capped by the server's limited and potentially biased labeled data. To address these challenges, we propose Fed-ARPL, a novel Adaptive and Reciprocal Prototype Learning framework that implements a carefully designed three-phase learning strategy. First, a Warm-up Phase employs an adaptive thresholding mechanism to resolve the Cold-start dilemma, dynamically adjusting the pseudo-label confidence threshold to accelerate initial convergence and establish a stable feature space. Next, a Teacher-Guided Phase leverages the server's reliable prototypes to provide unified, one-way guidance, steering all clients toward a consistent and well-structured representation. Finally, to break the Knowledge Bottleneck, the framework culminates in a Student-Feedback Phase, establishing a reciprocal paradigm where high-performing clients contribute their refined local prototypes to enrich the global consensus. Comprehensive experiments validate the effectiveness of our Fed-ARPL framework, showcasing its state-of-the-art (SOTA) performance on several widely-recognized benchmark datasets.
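The abstract describes the Warm-up Phase's adaptive thresholding only at a high level. A minimal sketch of the idea, assuming a hypothetical linear anneal from a lenient threshold toward a strict one over the warm-up phase (the paper's exact schedule is not given here):

```python
import numpy as np

def adaptive_pseudo_labels(logits, base_tau=0.95, warmup_frac=0.0):
    """Select pseudo-labels with a confidence threshold that starts
    lenient and tightens as warm-up progresses.

    logits: (N, C) array of unnormalized class scores.
    warmup_frac: fraction of the warm-up phase completed, in [0, 1].
    Returns (pseudo_labels, keep_mask).
    """
    # Softmax over classes (numerically stabilized).
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    conf = probs.max(axis=-1)
    pseudo = probs.argmax(axis=-1)
    # Hypothetical schedule: anneal linearly from 0.5 to base_tau,
    # so early rounds admit more unlabeled samples.
    tau = 0.5 + (base_tau - 0.5) * warmup_frac
    mask = conf >= tau
    return pseudo, mask
```

Early in training (`warmup_frac=0.0`) most samples pass the relaxed threshold, mitigating the cold start; by the end of warm-up (`warmup_frac=1.0`) only high-confidence pseudo-labels survive.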
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 10229