Interactive Boosting of Neural Networks for Small-sample Image Classification

02 Jan 2018 (modified: 25 Jan 2018) · ICLR 2018 Conference Withdrawn Submission · Readers: Everyone
Abstract: Neural networks have recently shown excellent performance on numerous classification tasks. These networks often have a large number of parameters and thus require large amounts of training data. When the number of training examples is small, however, a highly flexible network quickly overfits the training data, resulting in large model variance and poor generalization. To address this problem, we propose a new ensemble learning method called InterBoost for small-sample image classification. In the training phase, InterBoost first randomly generates two complementary datasets to train two base networks of the same structure separately; the next two complementary datasets, used for further training the networks, are then generated through interaction (or information sharing) between the two previously trained base networks. This interactive training process continues iteratively until a stopping criterion is met. In the testing phase, the outputs of the two networks are combined to obtain a single final score for classification. A detailed analysis of the method is provided for an in-depth understanding of its mechanism.
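The abstract does not specify how the complementary datasets are generated or what the interaction rule is. A minimal sketch of the training/testing loop, assuming the two complementary datasets are represented by per-example weights w and 1 − w, that weighted training is approximated by weighted resampling, and a hypothetical cross-network re-weighting rule for the interaction step, might look like this:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def interboost_train(X, y, n_rounds=5, seed=0):
    """Sketch of the InterBoost loop described in the abstract.

    Assumptions (not given in the abstract): X is a numpy array, y holds
    integer labels 0..K-1 with every class present in each resample, and
    the interaction step raises an example's weight for one network when
    the OTHER network assigns its true class low probability.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    w = rng.uniform(0.0, 1.0, size=n)  # random initial per-example weights in (0, 1)
    # warm_start=True so each round continues training the same networks,
    # matching the abstract's "further training the networks".
    nets = [MLPClassifier(hidden_layer_sizes=(64,), max_iter=300,
                          warm_start=True, random_state=seed + i)
            for i in range(2)]

    for _ in range(n_rounds):
        # Train each base network on its complementary weighted dataset,
        # approximated here by resampling proportionally to the weights.
        for net, weights in zip(nets, (w, 1.0 - w)):
            p = weights / weights.sum()
            idx = rng.choice(n, size=n, replace=True, p=p)
            net.fit(X[idx], y[idx])
        # Interaction (hypothetical rule): each example's weight for
        # network 0 grows when network 1 fits it poorly, and vice versa.
        err0 = 1.0 - nets[1].predict_proba(X)[np.arange(n), y]
        err1 = 1.0 - nets[0].predict_proba(X)[np.arange(n), y]
        w = err0 / (err0 + err1 + 1e-12)
    return nets

def interboost_predict(nets, X):
    # Testing phase: combine the two networks' outputs into one final score.
    return (nets[0].predict_proba(X) + nets[1].predict_proba(X)) / 2.0
```

Averaging the two probability vectors is one plausible reading of "the outputs of the two networks are combined"; the paper may use a different combination rule.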
TL;DR: We propose an ensemble method, InterBoost, for training neural networks for small-sample classification. The method generalizes better than other ensemble methods and significantly reduces variance.
Keywords: ensemble learning, neural network, small-sample, overfitting, variance