Abstract: In recent years, large language models (LLMs) have demonstrated remarkable capabilities in comprehending and generating natural language. A growing number of services offer LLMs for various tasks via APIs. Different LLMs excel at different domains of queries (e.g., text classification queries), and LLMs of different scales, complexities, and performance levels are priced diversely. Motivated by this, several researchers have investigated strategies for selecting an ensemble of LLMs that decreases overall usage cost while enhancing performance. However, to the best of our knowledge, no existing work addresses the problem of finding an LLM ensemble that, subject to a cost budget, maximizes ensemble performance with guarantees. In this paper, we formalize the performance of an ensemble of LLMs via the notion of prediction accuracy, which we formally define, and develop an approach for aggregating responses from multiple LLMs to enhance ensemble performance. Building on this, we formulate the Optimal Ensemble Selection problem: select a set of LLMs, subject to a cost budget, that maximizes the overall prediction accuracy. We show that prediction accuracy is non-decreasing and non-submodular, and provide evidence that the Optimal Ensemble Selection problem is likely NP-hard. By leveraging a submodular function that upper bounds prediction accuracy, we develop an algorithm called ThriftLLM and prove that it achieves an instance-dependent approximation guarantee with high probability. In extensive experiments on multiple real-world datasets, ThriftLLM achieves state-of-the-art quality on text classification and entity matching queries against various baselines while using a lower cost budget, strongly supporting the effectiveness and superiority of our method.
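The abstract does not give implementation details, but the two ingredients it names, aggregating responses from multiple LLMs and greedily selecting models under a cost budget via a submodular surrogate, can be illustrated with a minimal sketch. The aggregation rule (plain majority vote) and the cost-benefit greedy loop below are assumptions for illustration only; ThriftLLM's actual procedures may differ.

```python
from collections import Counter

def aggregate(responses):
    # Assumed aggregation rule: unweighted majority vote over the labels
    # returned by the selected LLMs (the paper's rule may weight models).
    return Counter(responses).most_common(1)[0][0]

def greedy_select(model_costs, budget, surrogate):
    # Budget-constrained greedy selection: repeatedly add the affordable
    # model with the largest marginal gain of the submodular surrogate
    # per unit cost. This is the standard cost-benefit greedy heuristic;
    # ThriftLLM's exact algorithm is not specified in the abstract.
    chosen, spent = [], 0.0
    remaining = dict(model_costs)  # model name -> query cost
    while remaining:
        best, best_ratio = None, 0.0
        for name, cost in remaining.items():
            if spent + cost > budget:
                continue  # would exceed the budget
            gain = surrogate(chosen + [name]) - surrogate(chosen)
            if cost > 0 and gain / cost > best_ratio:
                best, best_ratio = name, gain / cost
        if best is None:
            break  # no affordable model improves the surrogate
        chosen.append(best)
        spent += remaining.pop(best)
    return chosen
```

As a toy usage, a coverage-style surrogate (number of distinct query domains the chosen models handle, which is submodular) with costs `{"a": 1.0, "b": 2.0, "c": 5.0}` and budget 3.0 selects the cheap, complementary models `a` and `b` and skips the unaffordable `c`.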