ASPEST: Bridging the Gap Between Active Learning and Selective Prediction

Published: 10 Mar 2023, Last Modified: 28 Apr 2023 · ICLR 2023 Workshop DG Poster
Keywords: selective prediction, active learning, distribution shift
TL;DR: We propose a new learning paradigm, active selective prediction, together with a novel method, ASPEST, to address it.
Abstract: Selective prediction aims to learn a reliable model that abstains from making predictions when the model uncertainty is high. These predictions can then be deferred to a human for further evaluation. In many real-world scenarios, however, the distribution of test data differs from that of the training data, resulting in less accurate predictions and necessitating increased human labeling. Active learning circumvents this by querying only the most informative examples and, in several cases, has been shown to lower the overall labeling effort. We bridge the gap between selective prediction and active learning, proposing a new learning paradigm called *active selective prediction*, which learns to query more informative samples from the shifted target domain while increasing accuracy and coverage. We propose a simple but effective solution, ASPEST, which trains ensembles of model snapshots using self-training with their aggregated outputs as pseudo labels. Extensive experiments demonstrate that active selective prediction can significantly outperform prior work on selective prediction and active learning, and achieves better utilization of humans in the loop.
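The self-training step the abstract describes, aggregating snapshot outputs into pseudo labels, could be sketched as follows. This is a minimal illustration under assumptions: the function name, the confidence threshold, and the array shapes are hypothetical, not the paper's actual implementation.

```python
import numpy as np

def aggregate_pseudo_labels(snapshot_probs, threshold=0.8):
    """Average softmax outputs from an ensemble of model snapshots
    and keep confident argmax predictions as pseudo labels.

    snapshot_probs: shape (n_snapshots, n_samples, n_classes)
    Returns (sample indices, pseudo labels) for samples whose
    aggregated confidence meets the threshold.
    """
    avg = snapshot_probs.mean(axis=0)      # ensemble-averaged probabilities
    conf = avg.max(axis=1)                 # aggregated confidence per sample
    labels = avg.argmax(axis=1)            # pseudo label = most likely class
    keep = conf >= threshold               # self-train only on confident ones
    return np.nonzero(keep)[0], labels[keep]

# Toy example: 3 snapshots, 2 unlabeled samples, 2 classes.
probs = np.array([
    [[0.90, 0.10], [0.55, 0.45]],
    [[0.80, 0.20], [0.45, 0.55]],
    [[0.85, 0.15], [0.50, 0.50]],
])
idx, labels = aggregate_pseudo_labels(probs, threshold=0.8)
print(idx, labels)  # only the first sample is confident enough
```

The confident subset would then be added to the training pool for the next self-training round, while low-confidence samples remain candidates for human labeling via the active-learning query step.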
Submission Number: 2