Meta Co-Training: Two Views are Better than One

Published: 27 Oct 2025, Last Modified: 28 Jan 2026 · ECAI · CC BY 4.0
Abstract: In many critical computer vision scenarios unlabeled data is plentiful, but labels are scarce and difficult to obtain. As a result, semi-supervised learning, which leverages unlabeled data to boost the performance of supervised classifiers, has received significant attention in recent literature. One representative class of semi-supervised algorithms is the class of co-training algorithms. Co-training algorithms leverage two different models which have access to different independent and sufficient representations, or "views", of the data to jointly make better predictions. Each of these models creates pseudo-labels on unlabeled points which are used to improve the other model. We show that in the common case where independent views are not available, we can construct such views inexpensively using pre-trained models. Co-training on the constructed views yields a performance improvement over any of the individual views we construct, and performance comparable with recent approaches in semi-supervised learning. We present Meta Co-Training, a novel semi-supervised learning algorithm, which has two advantages over co-training: (i) learning is more robust when there is a large discrepancy between the information content of the different views, and (ii) it does not require retraining from scratch on each iteration. Our method achieves new state-of-the-art performance on ImageNet-10%, achieving a ∼4.7% reduction in error rate over prior work. Our method also outperforms prior semi-supervised work on several other fine-grained image classification datasets.
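To make the co-training mechanism described above concrete, here is a minimal sketch of the classical co-training loop (in the style of Blum and Mitchell) on toy two-view data. This is an illustration of the general scheme the abstract refers to, not the paper's Meta Co-Training algorithm: the nearest-centroid classifier, the confidence measure (margin between the two nearest centroids), and all names (`centroid_fit`, `cotrain`, etc.) are assumptions chosen for brevity.

```python
# Minimal co-training sketch on toy data with two "views" of each point.
# Illustrative only: the base learner and confidence heuristic are assumptions,
# not the method of the paper.

def centroid_fit(X, y):
    """Fit a nearest-centroid classifier; returns {label: centroid}."""
    cents = {}
    for label in set(y):
        pts = [x for x, t in zip(X, y) if t == label]
        cents[label] = [sum(vals) / len(pts) for vals in zip(*pts)]
    return cents

def centroid_predict(cents, x):
    """Predict a label and a confidence (margin between the two nearest centroids)."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(x, c)), lab)
                   for lab, c in cents.items())
    margin = dists[1][0] - dists[0][0] if len(dists) > 1 else float("inf")
    return dists[0][1], margin

def cotrain(Xa, Xb, labels, unlabeled, rounds=3, k=1):
    """Each round, each view's model pseudo-labels its k most confident
    unlabeled points, and those labels augment the OTHER view's training set."""
    ya, yb = dict(labels), dict(labels)   # per-view labeled sets
    unl = set(unlabeled)
    for _ in range(rounds):
        if not unl:
            break
        ca = centroid_fit([Xa[i] for i in ya], [ya[i] for i in ya])
        cb = centroid_fit([Xb[i] for i in yb], [yb[i] for i in yb])
        # view A teaches view B, then view B teaches view A
        for cents, X, dst in ((ca, Xa, yb), (cb, Xb, ya)):
            scored = sorted(((centroid_predict(cents, X[i]), i) for i in unl),
                            key=lambda t: -t[0][1])
            for (lab, _), i in scored[:k]:
                dst[i] = lab
                unl.discard(i)
    return ya, yb

# Toy data: two well-separated classes, seen through two different views.
Xa = [(0.0, 0.0), (5.0, 5.0), (0.2, 0.1), (4.8, 5.1), (0.1, 0.3), (5.2, 4.9)]
Xb = [(0.0,), (5.0,), (0.1,), (5.2,), (0.2,), (4.9,)]
labels = {0: 0, 1: 1}          # one labeled example per class
unlabeled = [2, 3, 4, 5]

ya, yb = cotrain(Xa, Xb, labels, unlabeled, rounds=2, k=1)
```

After two rounds every unlabeled point has received a pseudo-label in one of the two views' training sets, and on this separable toy data the pseudo-labels agree with the true clusters. Meta Co-Training replaces the expensive retrain-from-scratch step of this loop, per the abstract.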