CommunityBench: Benchmarking Community-Level Alignment across Diverse Groups and Tasks

ACL ARR 2026 January Submission 8212 Authors

06 Jan 2026 (modified: 20 Mar 2026) · CC BY 4.0
Keywords: Community-level Alignment, Pluralistic Alignment, Benchmark, Individual Modeling
Abstract: Alignment of large language models (LLMs) ensures that model behaviors reflect human values. Existing alignment strategies primarily follow two paths: one assumes a universal value set toward a unified goal (i.e., $\textit{one-size-fits-all}$), while the other treats every individual as unique and customizes models accordingly (i.e., $\textit{individual-level}$). However, assuming a monolithic value space marginalizes minority norms, while tailoring a model to each individual is prohibitively expensive. Recognizing that human society is organized into social clusters with high intra-group value alignment, we propose $\textbf{community-level alignment}$ as a "middle ground". Practically, we introduce $\textbf{CommunityBench}$, the first large-scale benchmark for evaluating community-level alignment, featuring four tasks grounded in Common Identity and Common Bond theory. We conduct a comprehensive evaluation of various foundation models on CommunityBench, revealing that current LLMs exhibit limited capacity to model community-specific preferences. Furthermore, we investigate the potential of community-level alignment to facilitate individual modeling, pointing to a promising direction for scalable and pluralistic alignment.
Paper Type: Long
Research Area: Safety and Alignment in LLMs
Research Area Keywords: safety and alignment, human behavior analysis, automatic evaluation
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English
Submission Number: 8212