Keywords: Multilingual Safety
Abstract: The growing use of Large Language Models (LLMs) in healthcare and emotionally sensitive spaces raises critical concerns about safety, ethical alignment, and the risk of unintentional emotional harm. We present a structured, multilingual, and culturally grounded analysis of persuasive and anxiety-inducing language generated by LLMs during interactions with users in conditions of psychological or physical vulnerability. We introduce a two-phase interaction framework designed to simulate emotional escalation and to assess whether a model's responses amplify anxiety, reinforce false beliefs, or exhibit excessive diagnostic intrusiveness across six languages, using different LLM families within a unified experimental pipeline. We further propose quantitative and qualitative metrics to capture anxiety amplification, catastrophic linguistic patterns, and diagnostic safety. The results highlight significant cross-lingual and cross-cultural disparities, underlining the importance of emotional alignment between models and users.
Paper Type: Long
Research Area: Safety and Alignment in LLMs
Research Area Keywords: LLMs, Multilingual
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Reproduction study, Approaches to low-resource settings, Approaches to low-compute settings & efficiency
Languages Studied: English, French, Italian, Russian, Spanish
Submission Number: 3856