Keywords: Large Language Models, Bias and Stereotypes, Mental Health
Abstract: Large Language Models (LLMs) can exhibit biases against vulnerable groups, but how they rationalize stereotypes and rights restrictions targeting people with mental health conditions remains underexplored. We audit a broad suite of open-weight LLMs on stereotype-justification prompts tied to mental health identities. We find that several widely used models endorse harmful stereotypes when explicitly asked to justify them, with endorsement rates varying across model families, versions, and mental health conditions. Finally, we show that widely used harmful-content evaluation and moderation frameworks often miss these nuanced, discriminatory responses, highlighting a gap in current AI safety evaluation for mental health groups.
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: Computational Social Science and Cultural Analytics, NLP for Social Good
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 10203