SEA-SafeguardBench: Evaluating AI Safety in SEA Languages and Cultures

15 Sept 2025 (modified: 10 Dec 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Safety, Evaluation and benchmark, large language model, multilingual, cultural
TL;DR: We present SEA-SafeguardBench, a human-verified benchmark in 8 SEA languages, showing that state-of-the-art LLMs struggle with region-specific harms compared to English.
Abstract: Safeguard models help large language models (LLMs) detect and block harmful content, but most evaluations remain English-centric and overlook linguistic and cultural diversity. Existing multilingual safety benchmarks often rely on machine-translated English data, which fails to capture nuances in low-resource languages. Southeast Asian (SEA) languages are particularly underrepresented despite the region’s linguistic diversity and unique safety concerns, from culturally sensitive political speech to region-specific misinformation. Addressing these gaps requires benchmarks that are natively authored to reflect local norms and harm scenarios. We introduce SEA-SafeguardBench, the first human-verified safety benchmark for SEA, covering eight languages and 21,640 samples across three subsets: general, in-the-wild, and content generation. Experimental results on our benchmark demonstrate that even state-of-the-art LLMs and guardrails are challenged by SEA cultural and harm scenarios and underperform relative to English texts.
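For concreteness, the sketch below shows one way a guardrail could be scored per language across the benchmark's three subsets. It is illustrative only: the dataset ID, config names, and field names (language, text, is_harmful) are assumptions rather than the released schema, and is_flagged is a trivial stand-in for a real safeguard model.

from collections import defaultdict

from datasets import load_dataset  # Hugging Face `datasets` library

def is_flagged(text: str) -> bool:
    # Stand-in for a real safeguard model call; this trivial keyword
    # check exists only to keep the sketch self-contained and runnable.
    return "attack" in text.lower()

def evaluate(dataset_id: str = "SEA-SafeguardBench") -> None:  # hypothetical ID
    # The three subsets named in the abstract; config names are assumed.
    for subset in ("general", "in_the_wild", "content_generation"):
        data = load_dataset(dataset_id, subset, split="test")
        correct: dict[str, int] = defaultdict(int)
        total: dict[str, int] = defaultdict(int)
        for row in data:
            lang = row["language"]          # assumed field name
            pred = is_flagged(row["text"])  # assumed field name
            correct[lang] += int(pred == bool(row["is_harmful"]))  # assumed label field
            total[lang] += 1
        for lang in sorted(total):
            print(f"{subset}/{lang}: accuracy {correct[lang] / total[lang]:.3f}")

if __name__ == "__main__":
    evaluate()

Comparing these per-language accuracies against an English baseline run of the same guardrail would surface the English-vs-SEA performance gap the abstract reports.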
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 5568