Abstract: Convolutional Neural Networks (CNNs) trained on large-scale labeled data have achieved state-of-the-art accuracy in various computer vision tasks. In real-world settings, however, the labels of large-scale data can be noisy, which can seriously degrade the performance of a CNN. In this work, we propose a self-error-correcting CNN (SEC-CNN) to address the noisy-label problem by simultaneously correcting improbable labels and optimizing the deep model. Specifically, SEC-CNN can correct a wrong label through a confidence policy that switches between the given label of a sample and the max-activated output neuron of the CNN. Based on the assumption that the deep model becomes increasingly accurate during training, the confidence policy relies mostly on the given labels in the early stages, but increasingly trusts the max-activated neuron of the learned network. SEC-CNN keeps CNN training effective even with 80% noisy labels. Extensive experimental results on the MNIST, CIFAR-10, ImageNet, and CCFD face datasets demonstrate the effectiveness of the proposed method in dealing with noisy labels.
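The label-switching idea described in the abstract could be sketched as follows. This is an illustrative assumption, not the paper's actual method: the function name, the linear trust schedule, and the per-sample stochastic switch are all hypothetical choices, since the abstract does not specify the exact form of the confidence policy.

```python
import numpy as np

def correct_labels(logits, given_labels, epoch, total_epochs, seed=0):
    """Hypothetical sketch of a confidence policy for label correction.

    Early in training, trust the given (possibly noisy) labels; as
    training progresses, increasingly trust the network's max-activated
    output neuron.

    logits:       (N, C) array of network outputs for N samples.
    given_labels: (N,) array of integer class labels (possibly noisy).
    """
    rng = np.random.default_rng(seed)
    # Assumed schedule: confidence in the model grows linearly with
    # training progress (the paper's actual schedule may differ).
    trust_model = epoch / total_epochs
    # The max-activated output neuron is the model's predicted class.
    predicted = logits.argmax(axis=1)
    # Per sample, switch to the model's prediction with probability
    # `trust_model`; otherwise keep the given label.
    use_pred = rng.random(len(given_labels)) < trust_model
    return np.where(use_pred, predicted, given_labels)
```

At `epoch = 0` this returns the given labels unchanged, and at `epoch = total_epochs` it returns the network's predictions for every sample, matching the abstract's description of the policy's two extremes.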