When Safety Alignment Fails to Generalize: Probing with Language Game Jailbreaks

ACL ARR 2026 January Submission 2180 Authors

02 Jan 2026 (modified: 20 Mar 2026), ACL ARR 2026 January Submission, CC BY 4.0
Keywords: LLM safety, Alignment Generalization, Jailbreak Attacks, Language Games
Abstract: Large language models (LLMs) are widely deployed in real-world applications, yet their safety alignment often fails to generalize beyond the specific linguistic formats seen during training. Prior work has shown that mismatched generalization can lead to alignment failures, but these studies typically rely on fixed or narrow transformation schemes. In this work, we probe safety alignment generalization using language game jailbreaks, a class of linguistically structured transformations that alter surface form while preserving fluency and semantic recoverability. We further introduce custom language games, which parameterize and vary transformation rules, enabling controlled exploration of alignment behavior across closely related linguistic variants. To scale this analysis, we propose AutoLanJail, an automated framework for discovering and refining language game-based jailbreaks. Experiments across open-source and closed-source LLMs show that safety fine-tuning is highly format-specific: defenses trained on one linguistic form fail to generalize to even minimal variations. These findings reveal a structural limitation of current fine-tuning-based alignment methods and highlight the need for safety evaluations that account for systematic linguistic variation.
Paper Type: Long
Research Area: Safety and Alignment in LLMs
Research Area Keywords: Language Modeling
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 2180