am-ELO: A Stable Framework for Arena-based LLM Evaluation

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 Spotlight Poster · CC BY 4.0
Abstract: Arena-based evaluation is a fundamental and significant evaluation paradigm for modern AI models, especially large language models (LLMs). The existing framework based on the ELO rating system suffers from an inherent instability problem due to ranking inconsistency and a lack of attention to the varying abilities of annotators. In this paper, we introduce a novel stable arena framework that addresses these issues by enhancing the ELO rating system. Specifically, we replace the iterative update method with a Maximum Likelihood Estimation (MLE) approach, m-ELO, and provide theoretical proof of the consistency and stability of the MLE approach for model ranking. Additionally, we propose am-ELO, which modifies the ELO rating's probability function to incorporate annotator abilities, enabling the simultaneous estimation of model scores and annotator reliability. Experiments demonstrate that this method ensures stability, showing that the framework offers a more robust, accurate, and stable evaluation method for LLMs.
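To give a concrete picture of the idea described above, the sketch below shows how model scores and annotator abilities could be estimated jointly by maximum likelihood instead of iterative ELO updates. It is a minimal illustration, not the authors' code: the logistic win-probability form scaled by an annotator-ability parameter, the toy data, and the use of scipy's L-BFGS-B optimizer are all assumptions made here for clarity, and the paper's exact am-ELO probability function may differ.

```python
# Minimal sketch (not the authors' implementation) of MLE-based arena scoring
# with an annotator-ability parameter, as described in the abstract.
import numpy as np
from scipy.optimize import minimize

# Toy pairwise comparison data: (model_i, model_j, annotator_k, outcome y)
# y = 1 means annotator k judged model i the winner over model j.
comparisons = [
    (0, 1, 0, 1),
    (1, 2, 0, 1),
    (0, 2, 1, 0),
    (2, 0, 1, 1),
]
n_models, n_annotators = 3, 2

def neg_log_likelihood(params):
    """Negative log-likelihood of the observed comparison outcomes."""
    scores = params[:n_models]        # model scores (ELO-style)
    abilities = params[n_models:]     # per-annotator ability parameters
    nll = 0.0
    for i, j, k, y in comparisons:
        # Assumed am-ELO-style win probability: logistic in the score gap,
        # scaled by the annotator's ability (an illustrative choice).
        p = 1.0 / (1.0 + np.exp(-abilities[k] * (scores[i] - scores[j])))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        nll -= y * np.log(p) + (1 - y) * np.log(1 - p)
    return nll

# Joint MLE of model scores and annotator abilities in one optimization,
# rather than order-dependent iterative updates.
x0 = np.concatenate([np.zeros(n_models), np.ones(n_annotators)])
result = minimize(neg_log_likelihood, x0, method="L-BFGS-B")
scores, abilities = result.x[:n_models], result.x[n_models:]
print("Estimated model scores:", scores)
print("Estimated annotator abilities:", abilities)
```

Because the estimate comes from a single likelihood maximization over all comparisons, it does not depend on the order in which comparisons are processed, which is the stability property the abstract emphasizes.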
Lay Summary: Current arena-based LLM evaluation frameworks using the ELO rating system suffer from instability: rankings are inconsistent because they are sensitive to the order of the data, and variations in annotator ability are ignored, which undermines evaluation credibility. To solve this problem, we propose am-ELO, a stable framework that enhances ELO by replacing iterative updates to obtain consistent rankings and modifying the probability function to model annotator abilities. Experiments show that our method reduces ELO score inconsistency to 30% of that of traditional methods, improves prediction accuracy, and robustly handles perturbed annotations.
Primary Area: General Machine Learning->Evaluation
Keywords: Large Language Models, Evaluation, Chatbot Arena, ELO Rating System
Submission Number: 4626