CURE: Cultural Understanding & Reasoning Evaluation - A Framework for “Thick” Culture Alignment Evaluation in LLMs

Authors: AAAI 2026 Workshop AIGOV Submission35 Authors

21 Oct 2025 (modified: 25 Nov 2025) · AAAI 2026 Workshop AIGOV Submission · CC BY 4.0
Keywords: Cultural alignment, LLM evaluation, Cultural competence, Cross-cultural AI, Cultural reasoning, Explainability
TL;DR: CURE requires LLMs to explain their cultural judgments across 145 countries, revealing that models achieving 75–82% accuracy fail when asked to justify their reasoning; this suggests current benchmarks overestimate cultural competence by rewarding pattern matching over genuine understanding.
Abstract: Large language models (LLMs) are increasingly deployed in culturally diverse environments, yet existing evaluations of cultural competence remain limited: they focus on de-contextualized correctness or forced-choice judgments, overlooking the cultural understanding and reasoning required for appropriate responses. To address this gap, we introduce a set of benchmarks that, instead of directly probing abstract norms or isolated statements, present models with realistic situational contexts that require culturally grounded reasoning. In addition to the standard Exact Match metric, we introduce four complementary metrics (Coverage, Specificity, Connotation, and Coherence) to capture different dimensions of a model's response quality. Empirical analysis across frontier models reveals that thin evaluation systematically overestimates cultural competence and produces unstable assessments with high variance. In contrast, thick evaluation exposes differences in reasoning depth, reduces variance, and provides more stable, interpretable signals of cultural understanding.
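The abstract names the five metrics but does not specify how they are computed. Purely as an illustrative sketch, the snippet below shows one way a thick-evaluation harness could aggregate per-scenario scores for Exact Match plus the four complementary metrics; every name here (`evaluate_thick`, `MetricFn`, `ThickEvalResult`, the stub scorer) is a hypothetical placeholder rather than the paper's actual method, and the concrete scorers (e.g., an LLM judge or rubric matcher) would have to come from the paper's methodology.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Callable, Dict, List

# Hypothetical metric signature: takes a model response and a culturally
# grounded reference annotation, returns a score in [0, 1].
MetricFn = Callable[[str, dict], float]

@dataclass
class ThickEvalResult:
    exact_match: float
    coverage: float
    specificity: float
    connotation: float
    coherence: float

def evaluate_thick(
    responses: List[str],
    references: List[dict],
    metrics: Dict[str, MetricFn],
) -> ThickEvalResult:
    """Average each metric over all (response, reference) scenario pairs."""
    scores = {
        name: mean(fn(resp, ref) for resp, ref in zip(responses, references))
        for name, fn in metrics.items()
    }
    return ThickEvalResult(**scores)

# Usage with a placeholder scorer (illustrative only): score 1.0 when the
# annotated norm keyword appears in the response, 0.0 otherwise.
stub: MetricFn = lambda resp, ref: 1.0 if ref.get("norm", "") in resp else 0.0
metrics = {name: stub for name in
           ("exact_match", "coverage", "specificity", "connotation", "coherence")}
result = evaluate_thick(["Greet the elders first."], [{"norm": "elders"}], metrics)
print(result)
```

Reporting all five per-metric averages side by side, rather than a single accuracy number, is what lets this kind of harness separate surface-level pattern matching from deeper reasoning quality.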
Submission Number: 35