The “Knowledge–Behavior Gap” in Cultural Taboo Safety of Large Language Models

ACL ARR 2026 January Submission 6620 Authors

05 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Cultural Taboo, Safety, Benchmark
Abstract: Cultural taboo safety is essential for deploying large language models (LLMs), as culturally insensitive outputs may cause offense or even social harm. However, existing cultural benchmarks primarily assess cultural knowledge or value biases, overlooking whether LLMs can recognize and respect cultural taboos, especially when taboos are implicitly hidden in seemingly harmless questions. Moreover, cultural taboos are implicit and context-dependent, posing unique challenges for reliable evaluation. To address these gaps, we introduce **CulShield**, the first public benchmark dedicated to evaluating and improving the cultural taboo safety of LLMs. CulShield spans 77 countries and regions and includes over 2,020 taboos. It evaluates models along two dimensions: explicit knowledge and implicit behavior. Experiments on several advanced LLMs (e.g., GPT-4o-mini, Gemini-2.5-pro) reveal a clear "knowledge-behavior gap": models often fail to apply known taboos during interaction. We further show that variations in linguistic context can significantly affect LLMs' cultural taboo safety. Code and data are available at: https://anonymous.4open.science/r/CulShield-7A0E.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: multilingual corpora, benchmarking, automatic creation and evaluation of language resources
Languages Studied: English, Chinese, Spanish
Submission Number: 6620