Interactive Boosting of Neural Networks for Small-sample Image Classification

Anonymous

Nov 03, 2017 (modified: Nov 03, 2017) ICLR 2018 Conference Blind Submission
  • Abstract: Neural networks have recently shown excellent performance on numerous classification tasks. These networks often have a large number of parameters and thus require large amounts of training data. When the number of training examples is small, however, such a highly flexible network quickly overfits the training data, resulting in large model variance and poor generalization. To address this problem, we propose a new ensemble learning method called InterBoost for small-sample image classification. In the training phase, InterBoost first randomly generates two complementary datasets to train two base networks of the same structure separately; it then generates the next pair of complementary datasets, used to further train the networks, through interaction (information sharing) between the two previously trained base networks. This interactive training process is repeated until a stopping criterion is met. In the testing phase, the outputs of the two networks are combined to obtain one final score for classification. Experimental results on the UIUC-Sports (UIUC) and LabelMe (LM) datasets demonstrate that the proposed ensemble method outperforms existing ones. Moreover, the confusion matrices of the two base networks trained by our method are shown to be complementary. A detailed analysis of the method is provided for an in-depth understanding of its mechanism.
  • TL;DR: We propose an ensemble method called InterBoost for training neural networks for small-sample classification. The method achieves better generalization than other ensemble methods and significantly reduces variance.
  • Keywords: ensemble learning, neural network, small-sample, overfitting, variance
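
The abstract only outlines the training procedure at a high level. The snippet below is a minimal sketch of one plausible reading of that procedure in PyTorch, assuming the two "complementary datasets" are realized as per-example weights w and 1 - w that sum to one, and that the "interaction" step shifts weight toward whichever network currently predicts an example worse. The helpers `make_net`, `train_weighted`, `interboost`, and `predict`, as well as the specific reweighting rule, are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np
import torch
import torch.nn as nn

def make_net(d_in, n_classes):
    # Both base networks share the same structure, as stated in the abstract.
    return nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, n_classes))

def train_weighted(net, X, y, w, epochs=50, lr=1e-2):
    # Train one base network on its "complementary dataset",
    # here expressed as per-example loss weights.
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss(reduction="none")
    for _ in range(epochs):
        opt.zero_grad()
        loss = (w * loss_fn(net(X), y)).mean()
        loss.backward()
        opt.step()

def interboost(X, y, n_classes, rounds=5, seed=0):
    rng = np.random.default_rng(seed)
    X_t = torch.tensor(X, dtype=torch.float32)
    y_t = torch.tensor(y, dtype=torch.long)
    n, d = X.shape
    # Step 1: random complementary weights w1 + w2 = 1 for every training point.
    w1 = torch.tensor(rng.uniform(size=n), dtype=torch.float32)
    net1, net2 = make_net(d, n_classes), make_net(d, n_classes)
    for _ in range(rounds):
        # Step 2: train the two base networks separately on their weighted data.
        train_weighted(net1, X_t, y_t, w1)
        train_weighted(net2, X_t, y_t, 1.0 - w1)
        # Step 3 (interaction): move each example's weight toward the network
        # that assigns it lower probability, keeping the two datasets complementary.
        with torch.no_grad():
            p1 = torch.softmax(net1(X_t), dim=1)[torch.arange(n), y_t]
            p2 = torch.softmax(net2(X_t), dim=1)[torch.arange(n), y_t]
            w1 = (1.0 - p1) / ((1.0 - p1) + (1.0 - p2) + 1e-8)
    return net1, net2

def predict(net1, net2, X):
    # Testing phase: combine the two networks' outputs into one final score.
    X_t = torch.tensor(X, dtype=torch.float32)
    with torch.no_grad():
        scores = torch.softmax(net1(X_t), dim=1) + torch.softmax(net2(X_t), dim=1)
    return scores.argmax(dim=1).numpy()
```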
