TL;DR: We propose a cost-effective approach to tuning the hyperparameters of LLM judges and analyze their performance across various dimensions, including scaling and hyperparameter choices.
Abstract: Evaluating Large Language Models (LLMs) often requires costly human annotations. To address this, LLM-based judges have been proposed, which compare the outputs of two LLMs, enabling the ranking of models without human intervention. While several approaches have been proposed, many confounding factors are present across papers. For instance, the model, the prompt, and other hyperparameters are typically changed at the same time, making apples-to-apples comparisons challenging.
In this paper, we propose to systematically analyze and tune the hyperparameters of LLM judges. To alleviate the high cost of evaluating a judge, we propose to leverage multi-objective, multi-fidelity optimization, which finds judges that trade off accuracy against cost and also significantly reduces the cost of the search. Our method identifies judges that not only outperform existing judges in accuracy and cost-efficiency but also rely on open-weight models, ensuring greater accessibility and reproducibility.
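To illustrate the idea of a multi-objective, multi-fidelity search over judge configurations, here is a minimal sketch (not the authors' implementation; see the linked repository for the actual method). Names such as `evaluate_judge` and the specific budgets are hypothetical stand-ins: each configuration is first scored on a small number of annotated examples, and only configurations on the accuracy/cost Pareto front are promoted to larger budgets.

```python
# Minimal sketch (assumptions, not the paper's code): multi-objective,
# multi-fidelity search over judge configurations.
import random

def evaluate_judge(config, n_examples):
    """Hypothetical stand-in: returns (human-agreement accuracy, cost in $)
    of a judge configuration measured on `n_examples` annotated pairs."""
    random.seed(hash((config, n_examples)) % 2**32)
    accuracy = random.uniform(0.5, 0.9)
    cost = n_examples * random.uniform(0.001, 0.01)
    return accuracy, cost

def pareto_front(results):
    """Keep configurations that are not dominated, i.e. no other config
    has both higher accuracy and lower cost."""
    front = []
    for cfg, (acc, cost) in results.items():
        dominated = any(
            a >= acc and c <= cost and (a > acc or c < cost)
            for other, (a, c) in results.items() if other != cfg
        )
        if not dominated:
            front.append(cfg)
    return front

configs = [f"judge-config-{i}" for i in range(32)]   # e.g. model x prompt x temperature
for budget in (50, 200, 800):                        # increasing fidelities (number of examples)
    results = {cfg: evaluate_judge(cfg, budget) for cfg in configs}
    configs = pareto_front(results)                  # only promote Pareto-optimal judges
    print(f"budget={budget}: {len(configs)} configs survive")
```

The key design choice is that selection is done on the full accuracy/cost front rather than a single scalar score, so cheap-but-decent judges are kept alongside expensive-but-accurate ones, while the increasing budgets keep the total search cost low.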
Lay Summary: Comparing different AI language models requires human experts to evaluate their responses—a costly and slow process. A cheaper alternative is to use AI models themselves as judges to compare other AI systems. Think of it as having one AI referee determine which of two AI players performed better at a task.
However, previous research has been inconsistent, like comparing apples to oranges. Different studies changed the AI judge, its instructions, and other settings all at once, making it impossible to know what actually works best.
This paper shows how to systematically tune the design decisions of AI judges. We propose a method that finds judges offering the best balance between accuracy and cost—identifying which AI judges are both reliable and affordable to run.
In particular, we find AI judges that outperform existing methods while using publicly available models, which we hope will make research that uses or builds on AI judges more open.
Link To Code: https://github.com/geoalgo/judgetuning
Primary Area: Deep Learning->Large Language Models
Keywords: LLM judge, LLM evaluation, hyperparameter optimization, multi-fidelity, multi-objective optimization
Submission Number: 12726