Keywords: Public Policy, Social Science, Benchmark
Abstract: Large Language Models (LLMs) are increasingly integrated into real-world decision-making, including in the domain of public policy. Yet their ability to comprehend and reason about policy-related content remains underexplored. To fill this gap, we present PolicyBench, the first large-scale cross-system (US-China) benchmark for evaluating policy comprehension, comprising 21K cases across a broad spectrum of policy areas and capturing the diversity and complexity of real-world governance. Following Bloom's taxonomy, the benchmark assesses three core capabilities: (1) Memorization: factual recall of policy knowledge, (2) Understanding: conceptual and contextual reasoning, and (3) Application: problem-solving in real-life policy scenarios. Building on this benchmark, we further propose PolicyMoE, a domain-specialized Mixture-of-Experts (MoE) model with expert modules aligned to each cognitive level. The proposed model demonstrates stronger performance on application-oriented policy tasks than on memorization or conceptual understanding, and yields the highest accuracy on structured reasoning tasks. Our results reveal key limitations of current LLMs in policy understanding and suggest paths toward more reliable, policy-focused models.
Paper Type: Long
Research Area: Computational Social Science, Cultural Analytics, and NLP for Social Good
Research Area Keywords: Public Policy, Social Science, Benchmark
Contribution Types: Data resources, Data analysis
Languages Studied: English, Chinese
Submission Number: 611