Attributional Safety Failures in Large Language Models under Code-Mixed Perturbations

Published: 01 Jan 2025, Last Modified: 07 Oct 2025, CoRR 2025, CC BY-SA 4.0
Abstract: Recent advances in large language models (LLMs) have raised significant safety concerns, particularly for code-mixed inputs and outputs. Our study systematically investigates whether LLMs are more susceptible to producing unsafe outputs from code-mixed prompts than from monolingual English prompts. Using explainability methods, we dissect the internal attribution shifts that underlie the models' harmful behaviors. We further explore cultural dimensions by distinguishing universally unsafe queries from culturally specific ones. This paper presents novel experimental insights that clarify the mechanisms driving this phenomenon.
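To make the attribution analysis concrete, below is a minimal sketch of prompt-token attribution for a monolingual versus a code-mixed query. The abstract does not specify the explainability method or the models studied, so gradient × input saliency over a small HuggingFace causal LM (`gpt2`) stands in here, and the example prompts are hypothetical.

```python
# Hedged sketch: the paper does not name its attribution method or models;
# gradient-times-input saliency over gpt2 is used purely as a stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # placeholder model choice, not the paper's
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def token_attributions(prompt: str):
    """Attribution of each prompt token toward the model's next-token
    prediction, using the gradient-times-input saliency baseline."""
    ids = tok(prompt, return_tensors="pt").input_ids
    # Embed the prompt and track gradients w.r.t. the embeddings.
    embeds = model.get_input_embeddings()(ids).detach().requires_grad_(True)
    logits = model(inputs_embeds=embeds).logits
    # Score: logit of the most likely next token at the final position.
    score = logits[0, -1].max()
    score.backward()
    # One scalar attribution per input token.
    attr = (embeds.grad * embeds).sum(-1).squeeze(0)
    return list(zip(tok.convert_ids_to_tokens(ids[0].tolist()), attr.tolist()))

# Compare where attribution mass falls for a monolingual phrasing
# versus a Hindi-English code-mixed phrasing of the same benign query.
for p in ["How do I reset my password?",
          "Mera password kaise reset karun?"]:
    print(p, token_attributions(p))
```

Comparing where attribution mass concentrates across the two phrasings illustrates, in miniature, the kind of internal attribution shift the abstract refers to.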