Keywords: LLM, active model selection, uncertainty quantification, information gathering
TL;DR: We propose LLM SELECTOR, an active model selection framework that efficiently identifies the best large language model under limited annotation budgets using preference-based judgments and information-theoretic query selection.
Abstract: We introduce LLM SELECTOR, the first framework for active model selection of Large Language Models (LLMs). Unlike prior work on evaluation and benchmarking, LLM SELECTOR addresses the challenge of selecting the best LLM for a given task under a limited annotation budget. In particular, LLM SELECTOR adaptively identifies a small set of informative queries to annotate in order to select the best LLM for the task efficiently. To further reduce annotation cost, we leverage a judge-based annotation model as the oracle. Through extensive experiments, we show that LLM SELECTOR reduces annotation costs by up to 58.33% for identifying the best model, and by up to 62.50% when selecting a near-best LLM that is within close vicinity of the best model.
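The abstract does not spell out the selection criterion, so the sketch below is an illustration only: one common way to realize information-theoretic query selection for active model selection, under assumptions not taken from the paper. It maintains a Beta posterior over each model's accuracy, uses judge-estimated per-query correctness probabilities (`p_hat`, hypothetical), assumes correctness outcomes are independent across models, and picks the query whose annotation most reduces the entropy of the "which model is best" posterior. All function names are hypothetical.

```python
# Minimal sketch of information-theoretic active model selection.
# Assumptions (not from the paper): Beta accuracy posteriors, independent
# per-model correctness, judge-estimated correctness probabilities p_hat.
import numpy as np

rng = np.random.default_rng(0)

def p_best(alpha, beta, draws=4000):
    """Monte Carlo estimate of P(model k has the highest accuracy)."""
    theta = rng.beta(alpha, beta, size=(draws, len(alpha)))
    return np.bincount(theta.argmax(1), minlength=len(alpha)) / draws

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def expected_info_gain(alpha, beta, p_hat, samples=16):
    """Approximate EIG of annotating one query. p_hat[k] is the estimated
    chance model k answers this query correctly (e.g. from a judge)."""
    h_now = entropy(p_best(alpha, beta))
    h_post = 0.0
    for _ in range(samples):
        y = rng.random(len(p_hat)) < p_hat            # hypothetical outcome
        h_post += entropy(p_best(alpha + y, beta + ~y))
    return h_now - h_post / samples

# Toy run: 3 models, 50 candidate queries, annotation budget of 10.
K, N = 3, 50
alpha, beta = np.ones(K), np.ones(K)                  # Beta(1,1) priors
p_hat = rng.uniform(0.3, 0.9, size=(N, K))            # hypothetical judge scores
remaining = list(range(N))
for _ in range(10):
    gains = [expected_info_gain(alpha, beta, p_hat[i]) for i in remaining]
    i = remaining.pop(int(np.argmax(gains)))          # most informative query
    y = rng.random(K) < p_hat[i]                      # simulate the annotation
    alpha, beta = alpha + y, beta + ~y                # Bayesian update
print("P(best):", p_best(alpha, beta))
```

Annotating a single query reveals correctness for every model at once, which is why one gold label can update all K posteriors; the budgeted loop stops after a fixed number of annotations rather than labeling the full query pool.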
Submission Number: 106