Prompt-to-Leaderboard: Prompt-Adaptive LLM Evaluations

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Large language model (LLM) evaluations typically rely on aggregated metrics like accuracy or human preference, averaging across users and prompts. This averaging obscures user- and prompt-specific variations in model performance. To address this, we propose Prompt-to-Leaderboard (P2L), a method that produces leaderboards specific to a prompt or set of prompts. The core idea is to train an LLM that takes natural language prompts as input and outputs a vector of Bradley-Terry coefficients, which are then used to predict the human preference vote. The resulting prompt-dependent leaderboards allow for unsupervised task-specific evaluation, optimal routing of queries to models, personalization, and automated evaluation of model strengths and weaknesses. Data from Chatbot Arena suggest that P2L captures the nuanced landscape of language model performance better than the averaged leaderboard. Furthermore, our findings suggest that P2L's ability to produce prompt-specific evaluations follows a power-law scaling similar to that observed in LLMs themselves. In January 2025, the router we trained based on this methodology achieved the #1 spot on the Chatbot Arena leaderboard. Our code is available at https://github.com/lmarena/p2l.
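To make the core idea concrete, here is a minimal sketch (not the authors' released code) of how a vector of prompt-conditioned Bradley-Terry coefficients turns into a preference prediction: the probability that one model beats another is the sigmoid of their coefficient difference. The model names and coefficient values below are hypothetical placeholders for what a P2L head would emit for a given prompt.

```python
# Minimal Bradley-Terry sketch (illustrative, not the paper's implementation).
import math

def bt_win_prob(coefs: dict[str, float], model_i: str, model_j: str) -> float:
    """P(model_i is preferred over model_j) under a Bradley-Terry model
    with prompt-dependent coefficients, e.g. produced by a P2L-style head."""
    # Sigmoid of the coefficient difference.
    return 1.0 / (1.0 + math.exp(-(coefs[model_i] - coefs[model_j])))

# Hypothetical per-prompt coefficients; a trained P2L model would output these.
coefs = {"model-a": 1.3, "model-b": 0.4, "model-c": -0.2}
print(bt_win_prob(coefs, "model-a", "model-b"))  # ~0.71
```

Sorting models by their coefficients for a prompt yields that prompt's leaderboard, since the coefficient ordering determines every pairwise win probability.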
Lay Summary: Current LLM leaderboards, such as Chatbot Arena, rank models by their average performance across many tasks. However, these general rankings don't indicate which LLM is best for a specific need, for instance, writing code versus crafting a marketing slogan. In particular, determining the best model for a single prompt is a significant challenge. We introduce Prompt-to-Leaderboard (P2L) to address this issue. P2L is a deep learning model that takes your specific question or task (a "prompt") and instantly generates a custom leaderboard: an ordering of which LLMs are predicted to perform best for that particular prompt. It learns this by analyzing millions of human preferences from real-world comparisons, and its performance scales with both data volume and parameter count. P2L not only lets you choose the best model for your prompt; it can also tell you which model to use under a cost constraint (see the routing sketch below). It further enables personalized evaluations based on a user's prompt history and provides automatic analysis of model strengths and weaknesses across different topics, offering a powerful toolkit for informed, adaptive LLM deployment.
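The cost-constrained routing mentioned above can be sketched in a few lines: given P2L's prompt-specific coefficients and per-model prices, pick the strongest model that fits the budget. The interface, prices, and scores here are assumptions for illustration, not the released P2L API.

```python
# Illustrative cost-aware routing sketch (assumed interface, not the P2L release).
def route(coefs: dict[str, float], costs: dict[str, float], budget: float) -> str:
    """Return the model with the highest prompt-specific Bradley-Terry
    coefficient among those whose per-query cost fits the budget."""
    affordable = {m: s for m, s in coefs.items() if costs[m] <= budget}
    if not affordable:
        raise ValueError("No model fits the cost budget.")
    return max(affordable, key=affordable.get)

coefs = {"model-a": 1.3, "model-b": 0.4, "model-c": -0.2}  # hypothetical P2L output
costs = {"model-a": 15.0, "model-b": 3.0, "model-c": 0.5}  # made-up prices per query
print(route(coefs, costs, budget=5.0))  # -> "model-b"
```

A production router would trade off predicted quality against cost more smoothly (e.g., maximizing expected win rate per dollar), but the greedy filter-then-argmax above captures the basic idea.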
Link To Code: https://github.com/lmarena/p2l
Primary Area: General Machine Learning->Evaluation
Keywords: llm, leaderboard, evaluations
Submission Number: 11964