Abstract: As an emerging weakly supervised learning framework, partial label learning considers inaccurate supervision where each training example is associated with multiple candidate labels, among which only one is valid. In this article, a first attempt toward employing dimensionality reduction to improve the generalization performance of partial label learning systems is investigated. Specifically, the popular linear discriminant analysis (LDA) technique is endowed with the ability to deal with partial label training examples. To tackle the challenge of unknown ground-truth labeling information, a novel learning approach named Delin is proposed, which alternates between LDA dimensionality reduction and candidate label disambiguation based on estimated labeling confidences over candidate labels. On the one hand, the (kernelized) projection matrix of LDA is optimized by utilizing disambiguation-guided labeling confidences. On the other hand, the labeling confidences are disambiguated by resorting to kNN aggregation in the LDA-induced feature space. Extensive experiments over a broad range of partial label datasets clearly validate the effectiveness of Delin in improving the generalization performance of well-established partial label learning algorithms.
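To make the alternating scheme described above concrete, the following is a minimal sketch, assuming a linear (non-kernelized) variant with confidence-weighted scatter matrices and uniform kNN aggregation restricted to each candidate set. The function name `delin_sketch`, the argument names, and these specific weighting choices are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np


def delin_sketch(X, candidate_mask, n_components=2, k=10, n_iters=10):
    """Sketch of alternating LDA projection and kNN-based disambiguation.

    X              : (n, d) feature matrix
    candidate_mask : (n, q) binary matrix, 1 where a label is a candidate
    """
    n, d = X.shape
    q = candidate_mask.shape[1]

    # Initialize labeling confidences uniformly over each candidate set.
    F = candidate_mask / candidate_mask.sum(axis=1, keepdims=True)

    for _ in range(n_iters):
        # ---- LDA step: confidence-weighted scatter matrices ----
        mu = X.mean(axis=0)
        Sb = np.zeros((d, d))
        Sw = np.zeros((d, d))
        for c in range(q):
            w = F[:, c]                          # per-instance confidence for class c
            n_c = w.sum()
            if n_c < 1e-12:
                continue
            mu_c = (w[:, None] * X).sum(axis=0) / n_c
            diff_b = (mu_c - mu)[:, None]
            Sb += n_c * diff_b @ diff_b.T        # between-class scatter
            Xc = X - mu_c
            Sw += (w[:, None] * Xc).T @ Xc       # within-class scatter

        # Projection directions: leading eigenvectors of pinv(Sw) @ Sb.
        eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
        order = np.argsort(eigvals.real)[::-1][:n_components]
        W = eigvecs[:, order].real               # (d, n_components) projection matrix

        # ---- Disambiguation step: kNN aggregation in the projected space ----
        Z = X @ W
        dists = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
        np.fill_diagonal(dists, np.inf)
        nn_idx = np.argsort(dists, axis=1)[:, :k]
        # Aggregate neighbors' confidences, restricted to each candidate set.
        F_new = F[nn_idx].sum(axis=1) * candidate_mask
        F = F_new / np.clip(F_new.sum(axis=1, keepdims=True), 1e-12, None)

    return W, F
```

In this sketch, the LDA step reuses the current labeling confidences as soft class memberships when forming the scatter matrices, and the disambiguation step renormalizes the aggregated neighbor confidences over each example's candidate labels; the kernelized version mentioned in the abstract is omitted for brevity.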