Keywords: Federated learning, Partial label learning, Weakly supervised learning
TL;DR: This paper proposes pFedPLL, a personalized federated algorithm that addresses the Partial Label Learning (PLL) problem in a Federated Learning (FL) environment.
Abstract: Partial Label Learning (PLL) is a valuable technique for training Machine Learning (ML) models on partially labeled datasets, where the ground-truth label of each data instance is concealed within its candidate label set. PLL methods learn label correlations from a single centralized dataset to predict the latent true label. When data are non-independent and identically distributed (non-i.i.d.) across workers in Federated Learning (FL), however, these label correlations interfere with one another. To address this issue, we propose pFedPLL, a personalized federated partial label learning algorithm with two new designs. In Label Correlation Isolation (LCI), we develop a twin-module architecture in which each worker's feature-level correlation matrix layer is kept local, isolating it from interference by other workers. In Label Correlation Personalization (LCP), we propose a bi-directional calibration loss that identifies a more accurate learning direction: positive calibration aligns the prediction with the latent true label, while negative calibration pushes the prediction away from the non-candidate label set. We provide a convergence analysis of pFedPLL with a rate of $O\left(\sqrt{\frac{1}{T}}\right)$ for smooth non-convex problems. Experimental results demonstrate that pFedPLL outperforms state-of-the-art federated PLL algorithms and federated versions of centralized PLL algorithms across nine datasets.
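The paper's exact formulation of the bi-directional calibration loss is not reproduced here; the following is a minimal PyTorch sketch of the general idea, assuming a softmax classifier, a per-instance binary `candidate_mask`, a pseudo-label taken as the highest-probability candidate (a common PLL heuristic, not necessarily the paper's choice), and a hypothetical weight `lam` on the negative term.

```python
import torch
import torch.nn.functional as F

def bidirectional_calibration_loss(logits, candidate_mask, lam=1.0):
    """Hedged sketch of a bi-directional calibration loss for PLL.

    logits:         (batch, num_classes) raw model outputs
    candidate_mask: (batch, num_classes) 1.0 for candidate labels, 0.0 otherwise
    lam:            weight on the negative (non-candidate) term -- an assumption
    """
    probs = F.softmax(logits, dim=1)

    # Positive calibration (assumed heuristic): treat the highest-probability
    # candidate label as the current estimate of the latent true label and
    # pull the prediction toward it.
    cand_probs = probs * candidate_mask
    pseudo_label = cand_probs.argmax(dim=1)
    pos_loss = F.cross_entropy(logits, pseudo_label)

    # Negative calibration (assumed form): penalize probability mass assigned
    # to labels outside the candidate set.
    non_cand_mass = (probs * (1.0 - candidate_mask)).sum(dim=1)
    neg_loss = -torch.log(1.0 - non_cand_mass + 1e-8).mean()

    return pos_loss + lam * neg_loss

# Example usage with random tensors (shapes only; not the paper's data):
# logits = torch.randn(8, 10)
# candidate_mask = (torch.rand(8, 10) > 0.5).float()
# loss = bidirectional_calibration_loss(logits, candidate_mask)
```

In a federated setting, a loss of this shape would be computed locally on each worker's partially labeled data, which is consistent with the paper's emphasis on keeping correlation-related components isolated per worker; the specific interaction with the LCI correlation matrix layer is not captured in this sketch.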
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6107