Abstract: Partial multi-label learning (PML) models the scenario where each training sample is annotated with a candidate label set, among which only a subset corresponds to the ground-truth labels. Existing PML approaches generally assume that there are sufficient partial multi-label samples for training the predictor. Nevertheless, when dealing with new tasks, it is more common that we only have a few PML samples associated with those tasks at hand. Furthermore, existing few-shot learning solutions typically assume the labels of support (training) samples are noise-free; as a result, noisy labels concealed within the candidate labels may seriously misinform the meta-learner and thus lead to compromised performance. We formalize this problem as a new learning paradigm called few-shot partial multi-label learning (FsPML), which aims to induce a noise-robust multi-label classifier from limited PML samples related to the target task. To address this problem, we propose a method named FsPML via prototype rectification (FsPML-PR). Specifically, FsPML-PR first conducts adaptive distance metric learning with an embedding network on previously encountered tasks. Next, it performs positive/negative prototype rectification and label disambiguation using sample features and label correlations in the embedding space. A new sample can then be classified based on its distances to the positive and negative prototypes. Extensive experiments on benchmark datasets demonstrate that FsPML-PR achieves superior performance across various settings.
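The final classification rule described in the abstract can be illustrated with a minimal sketch: per label, average the embeddings of positive and negative support samples into prototypes, then assign a label to a query exactly when its embedding is closer to that label's positive prototype. This is a simplified, hypothetical illustration (plain Euclidean distance, mean prototypes without the paper's rectification step), not the authors' actual implementation.

```python
import numpy as np

def build_prototypes(embeddings, labels):
    """Build per-label prototypes from support samples.

    embeddings: (n, d) array of support-sample embeddings.
    labels: (n, L) binary matrix (assumed already disambiguated).
    Returns (pos, neg), each of shape (L, d): the mean embedding of
    samples with / without each label.
    """
    pos = np.stack([embeddings[labels[:, l] == 1].mean(axis=0)
                    for l in range(labels.shape[1])])
    neg = np.stack([embeddings[labels[:, l] == 0].mean(axis=0)
                    for l in range(labels.shape[1])])
    return pos, neg

def predict(query, pos, neg):
    """Assign each label iff the query embedding (shape (d,)) is
    closer to the label's positive prototype than to its negative one."""
    d_pos = np.linalg.norm(pos - query, axis=1)
    d_neg = np.linalg.norm(neg - query, axis=1)
    return (d_pos < d_neg).astype(int)
```

In the full method, the embeddings would come from the learned metric network and the prototypes would be rectified before prediction; the decision rule itself is the distance comparison shown above.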