Dropping Just a Handful of Preferences Can Change Top Large Language Model Rankings

ICLR 2026 Conference Submission 14320 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Preference-based Evaluations, Robustness to Data Dropping, Bradley--Terry Model, Influence Functions
TL;DR: We present a method for auditing the robustness of LLM ranking systems to worst-case data-dropping; we find that dropping just 0.003% of human preferences can change the top-ranked model on Chatbot Arena.
Abstract: We propose a method for evaluating the robustness of widely used LLM ranking systems, which are variants of the Bradley--Terry model, to the worst-case removal of a very small fraction of preference data. Our approach is computationally fast and easy to adopt. When we apply our method to matchups from popular LLM ranking platforms, including Chatbot Arena and its derivatives, we find that the rankings of top-performing models can be remarkably sensitive to the removal of a small fraction of preferences; for instance, dropping just 0.003% of human preferences can change the top-ranked model on Chatbot Arena. Our robustness check identifies the specific preferences most responsible for such ranking flips, allowing these influential preferences to be inspected. We observe that rankings derived from MT-bench preferences are notably more robust than those from Chatbot Arena, likely owing to MT-bench's use of expert annotators and carefully constructed prompts. Finally, we find that rankings based on crowdsourced human evaluations are not systematically more or less sensitive than those based on LLM-as-a-judge preferences.
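The abstract does not spell out the procedure, but the keywords point to influence functions applied to a Bradley--Terry fit. Below is a minimal, hypothetical sketch of that idea in Python: fit the model, use a one-step influence approximation to score how much removing each individual preference would shrink the gap between the top two models, greedily drop the most harmful few, and refit to verify whether the top rank flips. The fitting routine, function names, drop fraction, and toy data are all illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch, assuming a ridge-regularized Bradley--Terry fit and the
# standard one-step influence-function approximation; this is NOT the
# authors' released code, and all names and data below are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_bradley_terry(X, y, ridge=1e-4, iters=100):
    """Fit Bradley--Terry scores by Newton's method on a ridge-regularized
    logistic loss. Row k of X has +1 at the first model's index and -1 at
    the second's; y[k] = 1 if the first model won the matchup."""
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(iters):
        p = sigmoid(X @ theta)
        grad = X.T @ (p - y) + ridge * theta
        W = p * (1.0 - p)
        H = X.T @ (X * W[:, None]) + ridge * np.eye(d)
        step = np.linalg.solve(H, grad)
        theta -= step
        if np.linalg.norm(step) < 1e-10:
            break
    # Recompute the Hessian at the converged estimate for the influence step.
    p = sigmoid(X @ theta)
    H = X.T @ (X * (p * (1.0 - p))[:, None]) + ridge * np.eye(d)
    return theta, H

def gap_influences(X, y, theta, H, top, runner_up):
    """One-step estimate of how dropping each single preference changes the
    score gap theta[top] - theta[runner_up]: removing example k moves theta
    by roughly H^{-1} grad_k, so the gap moves by grad_k . H^{-1} c."""
    c = np.zeros(H.shape[0])
    c[top], c[runner_up] = 1.0, -1.0
    v = np.linalg.solve(H, c)                       # H^{-1} c via one solve
    grads = (sigmoid(X @ theta) - y)[:, None] * X   # per-example gradients
    return grads @ v

# Toy demo with three hypothetical models; sizes and the drop fraction are
# illustrative, not the paper's numbers.
rng = np.random.default_rng(0)
d, n = 3, 2000
true_scores = np.array([0.30, 0.25, 0.0])
pairs = rng.integers(0, d, size=(n, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]
X = np.zeros((len(pairs), d))
X[np.arange(len(pairs)), pairs[:, 0]] = 1.0
X[np.arange(len(pairs)), pairs[:, 1]] = -1.0
y = (rng.random(len(pairs)) < sigmoid(X @ true_scores)).astype(float)

theta, H = fit_bradley_terry(X, y)
top, runner_up = np.argsort(-theta)[:2]

# Greedily drop the preferences estimated to shrink the gap the most, then
# refit on the remaining data to check whether the top rank actually flips.
infl = gap_influences(X, y, theta, H, top, runner_up)
k = max(1, int(0.003 * len(y)))                     # ~0.3% of this toy set
keep = np.setdiff1d(np.arange(len(y)), np.argsort(infl)[:k])
theta_refit, _ = fit_bradley_terry(X[keep], y[keep])
print("top model before:", top, "| after dropping", k, "preferences:",
      np.argmax(theta_refit))
```

Note that after the initial fit, scoring all n preferences costs a single d-dimensional linear solve plus one matrix-vector product, which is consistent with the abstract's claim that the audit is computationally fast and easy to adopt.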
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 14320