Exploring Binary Classification Hidden within Partial Label Learning

Published: 28 Jul 2022, Last Modified: 02 Oct 2025. OpenReview Archive Direct Upload. License: CC BY-NC-ND 4.0
Abstract: Partial label learning (PLL) aims to learn a discriminative model under incomplete supervision, where each instance is annotated with a candidate label set. The basic principle of PLL is that the unknown correct label y of an instance x resides in its candidate label set s, i.e., P(y ∈ s|x) = 1. On this basis, existing research either directly models P(y|x) under different data generation assumptions or proposes various surrogate multiclass losses, all of which implicitly encourage the model-based Pθ(y ∈ s|x) → 1. In this work, we instead explicitly construct a binary classification task toward P(y ∈ s|x) based on the discriminative model, that is, to predict whether the model-output label of x is one of its candidate labels. We formulate a novel risk estimator with an estimation error bound for the proposed PLL binary classification risk. By applying logit adjustment based on a disambiguation strategy, the practical approach directly maximizes Pθ(y ∈ s|x) while implicitly disambiguating the correct label from the candidates. Thorough experiments validate that the proposed approach achieves competitive performance against state-of-the-art PLL methods.
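For concreteness, here is a minimal sketch of how maximizing Pθ(y ∈ s|x) can be realized with a softmax classifier: the probability that the predicted label falls in the candidate set is the total softmax mass on s, and its negative log-likelihood gives a binary-classification-style loss. This is an illustrative assumption, not the paper's exact estimator; the function name pll_binary_loss and the mask layout are hypothetical, and the paper's logit adjustment and error bound are omitted.

```python
import torch
import torch.nn.functional as F

def pll_binary_loss(logits: torch.Tensor, candidate_mask: torch.Tensor) -> torch.Tensor:
    """Sketch of a binary objective toward P_theta(y in s | x).

    logits:         (batch, num_classes) raw model outputs.
    candidate_mask: (batch, num_classes) 0/1 mask, 1 where a label is in
                    the (nonempty) candidate set s of that instance.
    """
    log_probs = F.log_softmax(logits, dim=1)  # log P_theta(y | x)
    # log P_theta(y in s | x) = logsumexp of log-probabilities over candidate labels
    masked = log_probs.masked_fill(candidate_mask == 0, float("-inf"))
    log_p_in_s = torch.logsumexp(masked, dim=1)
    # Negative log-likelihood of the binary event "the label of x lies in s"
    return -log_p_in_s.mean()
```

In this sketch, driving the loss to zero pushes Pθ(y ∈ s|x) → 1, while the softmax couples the candidate labels so that mass concentrates on whichever candidate the model finds most consistent, which is the implicit disambiguation effect the abstract describes.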