Benchmarking Overton Pluralism in LLMs

ICLR 2026 Conference Submission 19385 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Pluralism, Overton pluralism, pluralistic alignment, benchmark
TL;DR: We introduce OvertonScore and the first benchmark for measuring Overton pluralism in LLMs, combining a large-scale human study with an automated LLM-as-a-Judge framework.
Abstract: We introduce the first framework for measuring Overton pluralism in large language models, i.e., the extent to which diverse viewpoints are represented in model outputs. We (i) formalize Overton pluralism as a set-coverage metric (OvertonScore), (ii) conduct a large-scale U.S.-representative human study (N=300; 15 questions; 8 LLMs), and (iii) develop an automated benchmark that closely reproduces human judgments. On average, models achieve OvertonScores of 0.2–0.37, with OpenAI's o4-mini performing best; yet all models remain far below the theoretical maximum of 1.0, revealing substantial headroom for improvement. Because repeated large-scale human studies are costly and slow, scalable evaluation tools are essential for model development. Our automated benchmark achieves high rank correlation with human judgments ($\rho=0.88$), providing a practical proxy that complements, rather than replaces, human assessment. By turning pluralistic alignment from a normative aim into a measurable benchmark, our work establishes a foundation for systematic progress toward more pluralistic LLMs.
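To make the abstract's two quantitative ideas concrete, here is a minimal sketch of a set-coverage score and of the human-agreement check. It assumes one plausible reading of "set-coverage metric": a judge (human or LLM) marks which reference viewpoints an answer covers, and the score is the covered fraction. The function name `overton_score` and all data values are hypothetical illustrations, not the paper's exact definition.

```python
from scipy.stats import spearmanr


def overton_score(covered: set[str], reference: set[str]) -> float:
    """Fraction of reference viewpoints represented in a model's answer.

    Hypothetical reading of the abstract's set-coverage metric: 1.0 means
    the answer covers the full reference set of viewpoints, 0.0 means none.
    """
    if not reference:
        raise ValueError("reference viewpoint set must be non-empty")
    return len(covered & reference) / len(reference)


# Toy example: the answer covers 3 of 5 reference viewpoints -> 0.6.
reference = {"v1", "v2", "v3", "v4", "v5"}
print(overton_score({"v1", "v3", "v5"}, reference))  # 0.6

# Validating an automated judge against humans, in the spirit of the
# abstract's rho = 0.88: Spearman rank correlation between per-model
# OvertonScores from the human study and from the automated benchmark.
human_scores = [0.37, 0.25, 0.31, 0.22, 0.28, 0.20, 0.33, 0.26]  # illustrative
judge_scores = [0.35, 0.24, 0.33, 0.21, 0.27, 0.22, 0.34, 0.25]  # illustrative
rho, _ = spearmanr(human_scores, judge_scores)
print(f"Spearman rho = {rho:.2f}")
```

Rank correlation is the natural check here because it asks only whether the automated judge orders the eight models the same way humans do, not whether the raw scores match.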
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 19385