Learned Model Composition With Critical Sample Look-Ahead for Semi-Supervised Learning on Small Sets of Labeled Samples

Abstract: In this work, we propose to push the performance limit of semi-supervised learning on very small sets of labeled samples by developing a new method called learned model composition with critical sample look-ahead (LMCS). Training accurate deep neural networks on very small labeled sets is a challenging problem: with few labels, the initial network suffers from low accuracy, and a semi-supervised learning process built on this error-prone network is fragile and unstable. To address this issue, we introduce a look-ahead master model that identifies the correct direction of model evolution and effectively guides the semi-supervised learning of the student model. Specifically, our proposed LMCS method explores two major ideas. First, it introduces a new learned model composition structure so that we can compose a more capable master network from student models of past iterations through a network learning process. Second, we develop a new method, called confined maximum entropy search, to discover new critical samples near the model's decision boundary and give the master model look-ahead access to these samples, enhancing its guidance capability. Extensive experimental results demonstrate that the proposed LMCS method outperforms state-of-the-art semi-supervised learning methods, especially on small sets of labeled samples. For example, on the CIFAR-10 dataset with only 80 labeled samples, our method outperforms Google's MixMatch method, reducing the error rate by more than 10%.
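The abstract does not give implementation details, but the two ideas can be sketched concretely. The PyTorch snippet below is a minimal, illustrative reading of them: a master model formed as a learned convex combination of frozen past student checkpoints, and a boundary-sample selector that picks high-entropy unlabeled samples confined to a mid-confidence band. All names (ComposedMaster, confined_max_entropy_search) and the specific mixing/confinement choices are assumptions for illustration, not the authors' released code.

```python
# Hypothetical sketch of the two LMCS ideas described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ComposedMaster(nn.Module):
    """Master model as a learned convex combination of past student checkpoints
    (one plausible form of 'learned model composition')."""
    def __init__(self, students):
        super().__init__()
        self.students = nn.ModuleList(students)
        for s in self.students:              # past checkpoints stay frozen
            for p in s.parameters():
                p.requires_grad_(False)
        # one learnable mixing weight per checkpoint
        self.mix = nn.Parameter(torch.zeros(len(students)))

    def forward(self, x):
        w = torch.softmax(self.mix, dim=0)                    # convex weights
        logits = torch.stack([s(x) for s in self.students])   # (K, B, C)
        return (w.view(-1, 1, 1) * logits).sum(dim=0)         # (B, C)

def confined_max_entropy_search(model, unlabeled, k=64, conf_band=(0.3, 0.7)):
    """Select unlabeled samples near the decision boundary: highest predictive
    entropy, confined to a mid-confidence band (assumed reading of
    'confined maximum entropy search')."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(unlabeled), dim=1)            # (N, C)
    conf, _ = probs.max(dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    lo, hi = conf_band
    mask = (conf >= lo) & (conf <= hi)                        # confinement region
    # push out-of-band samples below any real entropy so topk ignores them
    entropy = torch.where(mask, entropy, torch.full_like(entropy, -1.0))
    return entropy.topk(min(k, unlabeled.size(0))).indices
```

Under this reading, the selected critical samples would be shown to the composed master so its guidance of the next student iteration reflects the current decision boundary; the mixing weights themselves can be trained on the labeled set.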