Highlights

• We propose the novel MCL framework, which allows unbiased risk estimation from samples with an arbitrary number of complementary labels, arbitrary losses, and arbitrary models (linear and deep), demonstrating the practicality of the proposed framework.
• We further propose the MCUL framework to exploit easily accessible unlabeled samples and experimentally validate the benefits of incorporating them. A rigorous convergence analysis of the statistical error bounds also establishes the reliability of the proposed frameworks.
• The previous complementary-label learning framework and ordinary classification are proven to be special cases of the MCUL framework, which shows its comprehensiveness as a weakly-supervised learning framework.
• We further integrate class-prior information into the risk estimator and experimentally demonstrate its effectiveness.
• We propose an adaptive risk correction scheme to alleviate over-fitting and show its consistency under mild assumptions. Experimental results confirm that it improves classification accuracy.
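To make the notion of unbiased risk estimation from complementary labels concrete, the sketch below implements the classical single-complementary-label estimator under the uniform-generation assumption (for K classes, R(f) = E[ Σ_k ℓ(f(x), k) − (K−1) ℓ(f(x), ȳ) ], where ȳ is a class known NOT to be the true label); the MCL framework described above generalizes this setting to multiple complementary labels. The function name and array shapes are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def unbiased_cl_risk(logits, comp_labels, num_classes):
    """Empirical unbiased risk from single complementary labels.

    logits:      (n, K) raw model outputs.
    comp_labels: (n,) integer classes known NOT to be the true label.
    Uses softmax cross-entropy as the base loss; under the uniform
    complementary-label assumption this estimator is unbiased for the
    ordinary classification risk.
    """
    # numerically stable log-softmax
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    losses = -log_probs                       # (n, K): loss if class k were true
    n = logits.shape[0]
    total = losses.sum(axis=1)                # Σ_k ℓ(f(x), k)
    comp = losses[np.arange(n), comp_labels]  # ℓ(f(x), ȳ)
    return np.mean(total - (num_classes - 1) * comp)
```

Note that for K = 2 the estimator reduces to ordinary cross-entropy on the one class that is not the complementary label; for K > 2 individual terms can be negative, which is what motivates risk-correction schemes such as the adaptive one mentioned in the highlights.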