ASPEST: Bridging the Gap Between Active Learning and Selective Prediction

Published: 28 Feb 2024, Last Modified: 28 Feb 2024. Accepted by TMLR.
Abstract: Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain; such predictions can then be deferred to humans for further evaluation. An everlasting challenge for machine learning is that, in many real-world scenarios, the distribution of test data differs from that of the training data. This results in less accurate predictions and often increased reliance on humans, which can be difficult and expensive. Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples. Selective prediction and active learning have been approached from different angles, and the connection between them has been missing. In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain while increasing accuracy and coverage. For this new paradigm, we propose a simple yet effective approach, ASPEST, that utilizes ensembles of model snapshots with self-training, using their aggregated outputs as pseudo labels. Extensive experiments on numerous image, text, and structured datasets that suffer from domain shifts demonstrate that ASPEST significantly outperforms prior work on selective prediction and active learning (e.g., on the MNIST$\to$SVHN benchmark with a labeling budget of 100, ASPEST improves the AUACC metric from 79.36% to 88.84%) and achieves more optimal utilization of humans in the loop.
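The core mechanism the abstract describes, averaging the softmax outputs of several model snapshots to obtain both pseudo labels for self-training and a confidence score for abstention, can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the snapshot outputs are simulated and the threshold value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated setup: 3 model snapshots, 5 unlabeled target samples, 4 classes.
# In ASPEST-style training these would be softmax outputs of real checkpoints.
logits = rng.normal(size=(3, 5, 4))
softmax = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# Ensemble: average the snapshots' softmax outputs per sample.
avg_probs = softmax.mean(axis=0)          # shape (5, 4)

# Confidence = max class probability of the averaged ensemble;
# the argmax class serves as the pseudo label.
confidence = avg_probs.max(axis=-1)
pseudo_labels = avg_probs.argmax(axis=-1)

# Selective prediction: predict only when confidence clears a threshold,
# otherwise abstain and defer the sample to a human.
threshold = 0.4                           # illustrative value
accept = confidence >= threshold
coverage = accept.mean()

# Self-training would then fine-tune the model on the accepted samples
# paired with pseudo_labels[accept].
print(pseudo_labels[accept], coverage)
```

The averaged-softmax confidence is what ties the two halves of the paradigm together: low-confidence samples are candidates both for abstention (selective prediction) and for labeling queries (active learning).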
Submission Length: Regular submission (no more than 12 pages of main content)
Video: https://drive.google.com/file/d/1mEZ1O_b6PKGBR2Tw3VTOpzUCDN6IiiZH/view?usp=sharing
Code: https://github.com/google-research/google-research/tree/master/active_selective_prediction
Supplementary Material: zip
Assigned Action Editor: ~Masha_Itkina1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1823