Diversification is All You Need: Towards Data Efficient Image Understanding

Jul 20, 2020 (edited Feb 03, 2021), ECCV 2020 Workshop VIPriors
  • Keywords: Image Classification, Semantic Segmentation, Data Augmentation, Augmix, Mixup, seg-Augmix, Frequency weighted model ensemble, Test Time Augmentation
  • TL;DR: This paper proposes data diversification, test diversification, and model diversification to achieve competitive performance on the VIPriors challenges.
  • Abstract: A common issue in image understanding problems, such as image classification and semantic segmentation, is the lack of a sufficient number of labeled images in the training set, which often results in overfitting. To address this issue, we propose to diversify the models, the data, and the test samples to achieve competitive performance. Specifically, for image classification we adopt a two-stage framework. In the first stage, we train several different models independently of each other, varying the backbone architecture and the input modality by combining two state-of-the-art data augmentation techniques, AugMix and Mixup. In the second stage, we perform ensemble classification, combining the set of trained models to classify unseen images rather than relying on a single model. In experiments on a subset of the ImageNet dataset, our method consistently improves accuracy over the baseline. For semantic segmentation, we propose seg-Augmix, which extends the AugMix algorithm to the semantic segmentation task. In addition, a frequency-weighted model ensemble method is applied to further improve performance when combining different models. Using the proposed methods, we achieve competitive performance on both the semantic segmentation track and the image classification track of the VIPriors 2020 challenge.
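Of the two augmentation techniques named above, Mixup has a particularly compact definition: each training example is a convex combination of two images and their one-hot labels, with the mixing weight drawn from a Beta distribution. The sketch below illustrates that standard formulation (Zhang et al., 2018) with NumPy; it is an assumption-laden illustration of the general technique, not the authors' implementation, and the function name and `alpha` default are hypothetical choices.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Standard Mixup: blend two examples and their one-hot labels
    with a weight lam ~ Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng()
    lam = float(rng.beta(alpha, alpha))
    x = lam * x1 + (1.0 - lam) * x2   # mixed image
    y = lam * y1 + (1.0 - lam) * y2   # mixed (soft) label
    return x, y

# Toy example: mix an all-ones 2x2 "image" (class 0) with an
# all-zeros one (class 1).
x1, y1 = np.ones((2, 2)), np.array([1.0, 0.0])
x2, y2 = np.zeros((2, 2)), np.array([0.0, 1.0])
x, y = mixup(x1, y1, x2, y2, alpha=0.2)
# The soft label always sums to 1, and here every pixel of the
# mixed image equals y[0] (= lam), since x1 is ones and x2 zeros.
```

Because the label becomes a soft distribution, Mixup is typically trained with a cross-entropy loss against that soft target rather than a hard class index.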