Judging the Judges: A Systematic Study of Position Bias in LLM-as-a-Judge

ACL ARR 2025 July Submission 307 Authors

27 Jul 2025 (modified: 31 Aug 2025) · Readers: Everyone · License: CC BY 4.0
Abstract: LLM-as-a-Judge has emerged as a promising alternative to human evaluators across various tasks, yet inherent biases, particularly position bias (the tendency to favor solutions based on their position within the prompt), compromise its reliability. This exploratory study evaluates position bias in LLM judges in both pairwise and list-wise comparison settings, introducing three metrics: repetition stability, position consistency, and preference fairness. Our experiments involve 15 LLM judges, the MTBench and DevBench benchmarks, 22 tasks, and approximately 40 solution-generating models, yielding over 150,000 evaluation instances. We identify Judge-Level, Candidate-Level, and Task-Level factors that contribute to the bias. The findings confirm that position bias is not due to random chance and varies significantly across judges and tasks. While position bias is only weakly influenced by the length of prompt components, it is strongly affected by the quality gap between solutions. An agreement and disagreement analysis among judges further reveals how judging difficulty is distributed across the datasets and highlights opportunities for dataset modification.
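The abstract names the three metrics but does not state their formulas here, so the sketch below shows one simplified way such quantities could be computed from paired pairwise judgments (an original ordering and a position-swapped rerun). The function names, the "A"/"B" label scheme, and the signed fairness score are illustrative assumptions, not the paper's definitions.

```python
# Illustrative sketch only: simplified, assumed formulas for the three metrics
# named in the abstract, not the authors' implementation.
from collections import Counter
from typing import Sequence


def repetition_stability(repeated_choices: Sequence[str]) -> float:
    """Agreement of a judge with its own majority verdict over repeated,
    identical queries (assumed definition)."""
    counts = Counter(repeated_choices)
    return counts.most_common(1)[0][1] / len(repeated_choices)


def position_consistency(orig_choices: Sequence[str],
                         swapped_choices: Sequence[str]) -> float:
    """Fraction of instances where the judge picks the same underlying
    solution before and after the two solutions swap positions in the
    prompt. Choices are solution identities, e.g. "A"/"B" (assumed)."""
    pairs = list(zip(orig_choices, swapped_choices))
    return sum(o == s for o, s in pairs) / len(pairs)


def preference_fairness(orig_choices: Sequence[str],
                        swapped_choices: Sequence[str],
                        first_in_original: str = "A") -> float:
    """Simplified signed score in [-1, 1]: +1 if every flip follows the first
    position (primacy), -1 if every flip follows the last position (recency),
    and 0 if flips balance out or never occur (assumed definition)."""
    primacy = recency = 0
    for o, s in zip(orig_choices, swapped_choices):
        if o != s:
            # The judge changed its pick when the order was swapped.
            if o == first_in_original:
                # It originally chose the first-placed solution and, after the
                # swap, chose whatever newly occupies the first slot: primacy.
                primacy += 1
            else:
                recency += 1
    return (primacy - recency) / len(orig_choices)


# Usage example with hypothetical verdicts: solution A appears first in the
# original ordering and second in the swapped ordering.
orig = ["A", "B", "A", "A"]
swap = ["A", "A", "B", "A"]
print(position_consistency(orig, swap))      # 0.5
print(preference_fairness(orig, swap, "A"))  # 0.0 (one primacy flip, one recency flip)
```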
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: LLM-as-a-Judge, Position Bias, LLM evaluations
Contribution Types: Data analysis
Languages Studied: English
Previous URL: https://openreview.net/forum?id=gYQoMtYBwY
Explanation Of Revisions PDF: pdf
Reassignment Request Area Chair: Yes, I want a different area chair for our submission
Reassignment Request Reviewers: Yes, I want a different set of reviewers
Justification For Not Keeping Action Editor Or Reviewers: Reviewer Y6wc has consistently evaluated our paper based on the expectation that we propose new mitigation strategies for position bias, rather than on our stated goal of developing an evaluation framework and providing empirical insights. This misunderstanding has led the reviewer to repeatedly criticize our work for not including debiasing techniques. However, our paper explicitly introduces an evaluation framework: it presents two novel metrics (Repetition Stability and Preference Fairness), extends the evaluation to list-wise comparison settings, and conducts systematic analyses at the Judge-, Candidate-, and Task-levels across multiple closed-source and open-source LLMs. We believe these contributions are significant, clearly stated, and valuable to the community.
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: N/A
B Use Or Create Scientific Artifacts: No
B1 Cite Creators Of Artifacts: N/A
B2 Discuss The License For Artifacts: N/A
B3 Artifact Use Consistent With Intended Use: N/A
B4 Data Contains Personally Identifying Info Or Offensive Content: N/A
B5 Documentation Of Artifacts: N/A
B6 Statistics For Data: Yes
B6 Elaboration: Section 3
C Computational Experiments: Yes
C1 Model Size And Budget: N/A
C2 Experimental Setup And Hyperparameters: Yes
C3 Descriptive Statistics: Yes
C3 Elaboration: Section 3 and 4
C4 Parameters For Packages: N/A
D Human Subjects Including Annotators: No
D1 Instructions Given To Participants: N/A
D2 Recruitment And Payment: N/A
D3 Data Consent: N/A
D4 Ethics Review Board Approval: N/A
D5 Characteristics Of Annotators: N/A
E Ai Assistants In Research Or Writing: Yes
E1 Information About Use Of Ai Assistants: No
E1 Elaboration: We used AI assistants only to help refine the writing; we do not list this in the main paper.
Author Submission Checklist: Yes
Submission Number: 307