Judge's Verdict: A Comprehensive Analysis of LLM Judge Capability Through Human Agreement

ICLR 2026 Conference Submission 21214 Authors

19 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: LLM-as-a-Judge, Judge's Verdict Benchmark, Human-AI Agreement, Cohen's Kappa Analysis, RAG Evaluation
Abstract: This research introduces the \textbf{Judge's Verdict Benchmark}, a novel two-step methodology for evaluating Large Language Models (LLMs) as judges of response accuracy. We assess how well 54 LLMs replicate human judgment when scoring responses from RAG (Retrieval-Augmented Generation) or Agentic pipelines against ground-truth answers. Our methodology progresses from traditional correlation analysis to a comprehensive Cohen's Kappa analysis that measures actual agreement patterns. The two-step approach comprises (1) a correlation test that filters for judges with strong alignment ($r \geq 0.80$), followed by (2) a human-likeness test that uses z-scores to identify two distinct judgment patterns: \textit{human-like} judgment ($|z| < 1$), which mimics natural human variation, and \textit{super-consistent} judgment ($z > 1$), which exceeds typical human-to-human agreement levels. This methodology reveals that 27 of the 54 tested LLMs achieve Tier 1 performance: 23 models exhibit human-like patterns that preserve the nuances of human judgment, while 4 models demonstrate super-consistent behavior, a pattern that could indicate either enhanced reliability or oversimplification of complex judgments. Testing 43 open-source models (1B–405B parameters) and 11 closed models (GPT, Gemini, and Claude variants), we demonstrate that judge excellence depends not solely on model size but on specific training strategies. Our key contributions are: (1) establishing that correlation alone is insufficient for judge evaluation, (2) introducing a "Turing Test for judges" based on agreement patterns, and (3) providing a standardized benchmark that classifies LLM judges into distinct performance tiers for different evaluation needs.
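The two-step test described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' released implementation: the function names (`classify_judge`, `cohens_kappa`, `pearson_r`), the binary-label simplification, and the tier labels are assumptions made for clarity. It assumes a judge's scores, a set of human reference scores, and a distribution of human-to-human Cohen's kappa values against which the judge's kappa is z-scored.

```python
# Hypothetical sketch of the two-step "Judge's Verdict" methodology:
# step 1 filters judges by Pearson correlation (r >= 0.80); step 2
# z-scores the judge's Cohen's kappa against the human-to-human
# kappa distribution to label it human-like or super-consistent.
import statistics


def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den


def cohens_kappa(a, b):
    """Cohen's kappa for two raters using binary labels (0/1)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    p_a1, p_b1 = sum(a) / n, sum(b) / n               # marginal rates
    pe = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)        # chance agreement
    return (po - pe) / (1 - pe)


def classify_judge(judge_scores, human_scores, human_kappas):
    """Two-step test: correlation filter, then z-score human-likeness.

    human_kappas: Cohen's kappa values measured between human raters,
    forming the baseline distribution for the z-score.
    """
    # Step 1: discard judges whose scores correlate weakly with humans.
    if pearson_r(judge_scores, human_scores) < 0.80:
        return "fail"
    # Step 2: z-score the judge's kappa against the human baseline.
    kappa = cohens_kappa(judge_scores, human_scores)
    mu = statistics.mean(human_kappas)
    sigma = statistics.stdev(human_kappas)
    z = (kappa - mu) / sigma
    if abs(z) < 1:
        return "human-like"        # within natural human variation
    if z > 1:
        return "super-consistent"  # exceeds human-to-human agreement
    return "below-human"           # correlated but less consistent
```

For example, a judge that reproduces the human labels exactly yields $\kappa = 1$, which sits well above a human baseline of $\kappa \approx 0.7$ and is therefore labeled super-consistent rather than human-like.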
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 21214