Keywords: Federated Learning, Federated Semi-Supervised Learning, Open-set Semi-Supervised Learning
TL;DR: This paper introduces Federated Open-Set Semi-Supervised Learning (FOSSL) under labels-at-server, identifies its challenges, and proposes a global-pivot–centric framework that achieves stable gains in ID accuracy and OOD detection.
Abstract: We study Federated Open-Set Semi-Supervised Learning (FOSSL) under a labels-at-server regime, where the server holds a small labeled set of in-distribution (ID) classes while clients contribute only unlabeled, non-IID data that may include unknown classes. This setting is practically important yet under-explored and poses distinctive challenges: pseudo-label brittleness and intensified heterogeneity from diverse out-of-distribution (OOD) categories. We propose OpenFL, a server-guided framework that stabilizes training and exploits only reliable ID signals. The server maintains a round-wise EMA (R-EMA) model to smooth round-to-round drift, uses EMA-derived global pivots to anchor representation learning, and aggregates clients by reliability-aware weights (alignment quality) rather than data size. Clients apply dual-gated pivot alignment, attracting only high-confidence ID samples, while uncertain/OOD samples receive a mild angular repulsion from all pivots via the normalization term. Across CIFAR-10, CIFAR-100, and FashionMNIST with diverse inlier/outlier splits and unseen OOD tests, OpenFL consistently improves both ID accuracy and OOD detection (AUROC) and remains stable where federated adaptations of strong SSL/OSSL baselines become unstable. This work establishes labels-at-server FOSSL as a benchmark problem and provides a principled solution framework.
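To make the abstract's mechanisms concrete, here is a minimal sketch of the server-side round-wise EMA (R-EMA) update, the reliability-weighted aggregation, and the client-side dual-gated pivot alignment loss. This is an illustration under stated assumptions, not the paper's implementation: names such as `decay`, `conf_threshold`, `sim_threshold`, and `temperature` are hypothetical hyperparameters, and the second (similarity-based) gate is assumed, since the abstract names the gating "dual" without specifying both criteria.

```python
# Illustrative sketch of OpenFL's components as described in the abstract.
# All hyperparameter names and the second gating criterion are assumptions.
import torch
import torch.nn.functional as F


def rema_update(ema_state: dict, new_state: dict, decay: float = 0.9) -> dict:
    """Round-wise EMA: smooth the aggregated global model across rounds."""
    return {k: decay * ema_state[k] + (1.0 - decay) * new_state[k]
            for k in ema_state}


def reliability_weighted_aggregate(client_states: list, reliabilities: list) -> dict:
    """Aggregate client models by alignment-quality weights, not data size."""
    w = torch.tensor(reliabilities, dtype=torch.float32)
    w = w / w.sum()
    keys = client_states[0].keys()
    return {k: sum(wi * cs[k] for wi, cs in zip(w, client_states)) for k in keys}


def dual_gated_pivot_loss(feats, logits, pivots,
                          conf_threshold=0.95, sim_threshold=0.5,
                          temperature=0.1):
    """Attract only high-confidence ID samples toward their pseudo-labeled pivot.

    The softmax normalization over similarities to *all* pivots supplies the
    mild angular repulsion from non-target pivots mentioned in the abstract;
    samples failing either gate (low confidence, or weak similarity to every
    pivot, i.e. likely OOD) are excluded from the attraction term.
    """
    feats = F.normalize(feats, dim=1)        # unit-norm features
    pivots = F.normalize(pivots, dim=1)      # unit-norm global pivots
    probs = logits.softmax(dim=1)
    conf, pseudo = probs.max(dim=1)          # pseudo-label and its confidence
    sims_all = feats @ pivots.t()            # cosine similarity to each pivot
    gate = (conf >= conf_threshold) & (sims_all.max(dim=1).values >= sim_threshold)
    if gate.sum() == 0:
        return feats.new_zeros(())           # no reliable ID samples this batch
    logits_pivot = sims_all[gate] / temperature
    return F.cross_entropy(logits_pivot, pseudo[gate])
```

In this reading, weighting clients by alignment quality rather than sample count downweights clients whose unlabeled pools are dominated by unknown classes, which is what keeps aggregation stable under the heterogeneity the abstract describes.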
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 18227