Keywords: Pluralism, Overton pluralism, pluralistic alignment, benchmark
TL;DR: We introduce OvertonScore and the first benchmark for measuring pluralism in LLMs, combining a large-scale human study with an automated LLM-as-a-Judge framework.
Abstract: We introduce the first framework for measuring Overton pluralism in large language models: the extent to which diverse viewpoints are represented in model outputs. We (i) formalize Overton pluralism as a set-coverage metric (OvertonScore), (ii) conduct a large-scale U.S.-representative human study (N=100; 30 questions; 8 LLMs), and (iii) develop an automated benchmark that reproduces human judgments with high fidelity. Our findings show that while most models achieve comparable pluralism, Gemma 3-27B underperforms and GPT o4-mini achieves the highest OvertonScore. The automated benchmark replicates these human results and generalizes across unseen models, enabling scalable evaluation.
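Since the abstract describes OvertonScore only as a set-coverage metric, here is a minimal Python sketch of one plausible instantiation, assuming the score is the fraction of a question's annotated reference viewpoints that judges (human or LLM) mark as covered by a model's response; the function name, data layout, and example labels are hypothetical, not the authors' implementation.

```python
from typing import Set

def overton_score(covered: Set[str], reference: Set[str]) -> float:
    """Set-coverage score: fraction of reference viewpoints represented.

    `reference` is the full set of viewpoint labels annotated for a
    question; `covered` is the subset judged present in the model's
    response (by human raters or an LLM-as-a-Judge). Returns a value
    in [0, 1].
    """
    if not reference:
        raise ValueError("reference viewpoint set must be non-empty")
    # Intersect first so stray labels outside the reference set
    # cannot inflate the score.
    return len(covered & reference) / len(reference)

# Example: 3 of 4 annotated viewpoints appear in the response -> 0.75.
print(overton_score({"pro", "con", "conditional"},
                    {"pro", "con", "conditional", "abstain"}))
```

Under this reading, a model's benchmark score would be the mean of per-question coverage over the 30 questions, which is consistent with comparing models by a single OvertonScore as the abstract does.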
Submission Number: 235