All models are wrong, some are useful: Model Selection with Limited Labels

Published: 22 Jan 2025, Last Modified: 07 Feb 2025. AISTATS 2025 Poster. License: CC BY 4.0
Abstract: We introduce Model Selector, a framework for label-efficient selection of pretrained classifiers. Given a pool of unlabeled target data, Model Selector samples a small subset of highly informative examples for labeling, in order to efficiently identify the best pretrained model for deployment on this target dataset. Through extensive experiments, we demonstrate that Model Selector drastically reduces the need for labeled data while consistently picking the best or a near-best performing model. Across 18 model collections on 16 different datasets, comprising over 1,500 pretrained models, Model Selector reduces the labeling cost by up to 94.15% to identify the best model, compared to the cost of the strongest baseline. Our results further highlight the robustness of Model Selector: it reduces the labeling cost by up to 72.41% when selecting a near-best model, whose accuracy is within 1% of the best model's.
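To make the setting concrete, here is a minimal sketch of label-efficient model selection. This is an illustration of the general idea only, not the paper's Model Selector algorithm: the disagreement-based sampling heuristic, the `oracle` labeling function, and all names here are our own assumptions.

```python
import random

def select_model(model_preds, oracle, budget, seed=0):
    """Pick the best of several pretrained models using few labels (sketch).

    model_preds: list of per-model prediction lists, shape [num_models][n].
    oracle: function idx -> true label; each call costs one unit of budget.
    budget: number of examples we are allowed to label.
    """
    n = len(model_preds[0])
    # Informativeness proxy (our assumption): number of distinct predictions
    # per example -- examples where the models disagree help rank them.
    disagreement = [len({m[i] for m in model_preds}) for i in range(n)]
    rng = random.Random(seed)
    order = sorted(range(n),
                   key=lambda i: (disagreement[i], rng.random()),
                   reverse=True)
    queried = order[:budget]
    labels = {i: oracle(i) for i in queried}  # the only labels we pay for
    accs = [sum(m[i] == labels[i] for i in queried) / len(queried)
            for m in model_preds]
    # Return the index of the model with the best empirical accuracy
    # on the small labeled subset.
    return max(range(len(model_preds)), key=lambda j: accs[j])
```

With a labeling budget far below the dataset size, the model that scores best on the queried subset serves as the selection; the paper's contribution is choosing that subset far more efficiently than baselines.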
Submission Number: 685