Multilingual Refusal Alignment for Safer Large Language Models

ACL ARR 2026 January Submission3740 Authors

04 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: multilingualism, less-resourced languages, multilingual benchmarks, multilingual evaluation, multilingual safety, alignment
Abstract: As Large Language Models (LLMs) are deployed globally, ensuring their safety and alignment across multiple languages becomes paramount. However, safety behaviors often vary unpredictably between languages, posing significant challenges for consistent and ethical AI. In this work, we systematically investigate the dynamics of multilingual alignment, exploring whether single-language alignment transfers cross-lingually, how language consistency is preserved during training, and the resulting trade-offs with general knowledge capabilities. We introduce RefusEU, a novel refusal alignment dataset covering 12 European languages, including a dedicated test set for evaluating current state-of-the-art models. Our controlled Direct Preference Optimization (DPO) experiments yield two key insights: aligning models exclusively in English is insufficient to ensure cross-lingual safety, even for the same harm categories, whereas training on multilingual datasets can improve safety without degrading general performance, as measured by the Global MMLU benchmark.
Paper Type: Long
Research Area: Safety and Alignment in LLMs
Research Area Keywords: multilingualism, less-resourced languages, multilingual benchmarks, multilingual evaluation, multilingual safety, alignment
Contribution Types: NLP engineering experiment, Approaches to low-resource settings, Data resources, Data analysis
Languages Studied: Polish, English, Czech, Slovak, Slovenian, Lithuanian, Latvian, German, Italian, French, Spanish, Portuguese
Submission Number: 3740