Selection via Proxy: Efficient Data Selection for Deep Learning

Anonymous

Sep 25, 2019 · ICLR 2020 Conference Blind Submission
  • TL;DR: we can significantly improve the computational efficiency of data selection in deep learning by using a much smaller proxy model to perform data selection.
  • Abstract: Data selection methods, such as active learning and core-set selection, are useful tools for machine learning on large datasets, but they can be prohibitively expensive to apply in deep learning. Unlike in other areas of machine learning, the feature representations that these techniques depend on are learned rather than given, so they require substantial training time before selection can even begin. In this work, we show that we can greatly improve the computational efficiency of data selection in deep learning by using a small proxy model to perform data selection (e.g., selecting data points to label for active learning). By removing hidden layers from the target model or training for fewer epochs, we create proxies that are an order of magnitude faster to train. Although these small proxy models have higher error rates, we find that they empirically provide a useful signal for data selection. We evaluate this “selection via proxy” (SVP) approach on several data selection tasks across five datasets: CIFAR10, CIFAR100, ImageNet, Amazon Review Polarity, and Amazon Review Full. For active learning, applying SVP can give an order of magnitude improvement in data selection runtime (i.e., the time it takes to repeatedly train and select points) without significantly increasing the final error. For core-set selection, proxies that are over 10x faster to train than their larger, more accurate target models can remove up to 50% of the data without harming the final accuracy of the target, making end-to-end training time savings possible. (A minimal sketch of the proxy-based selection loop is given after the keywords below.)
  • Keywords: data selection, active learning, core-set selection, deep learning, uncertainty sampling
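To make the idea concrete, here is a minimal sketch of one round of SVP-style active learning with uncertainty sampling, in Python. The helpers train_proxy and predict_proba, the data containers, and the least-confidence criterion are illustrative assumptions rather than the paper's exact implementation; in the paper, the proxy is the target architecture with hidden layers removed or with training truncated to fewer epochs.

    import numpy as np

    def select_via_proxy(train_proxy, predict_proba, labeled_set, unlabeled_pool, budget):
        """One round of SVP-style active learning (hypothetical helpers).

        train_proxy:   trains a cheap proxy model (fewer layers / fewer epochs).
        predict_proba: returns class probabilities for the pool,
                       shape (n_unlabeled, n_classes).
        """
        proxy = train_proxy(labeled_set)          # fast to train, higher error is OK
        probs = predict_proba(proxy, unlabeled_pool)
        uncertainty = 1.0 - probs.max(axis=1)     # least-confidence uncertainty score
        return np.argsort(-uncertainty)[:budget]  # indices of points to send for labeling

The selected points would then be labeled, moved into the labeled set, and the loop repeated; only the final labeled set is used to train the large target model, which is where the end-to-end runtime savings come from.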