Confirmation: I have read and agree with the IEEE BSN 2025 conference submission's policy on behalf of myself and my co-authors.
Keywords: Constitutional AI, Mental Health, LLMs, Computational Health, Alignment
TL;DR: This paper proposes and evaluates domain-specific Constitutional AI (CAI) for improving the safety of large language models (LLMs) in mental health applications, such as therapy chatbots and crisis detection tools.
Abstract: Mental health applications have emerged as a critical area in computational health, driven by rising global rates of mental illness, the growing integration of AI into psychological care, and the need for scalable solutions in underserved communities. These applications, including therapy chatbots, crisis detection tools, and wellness platforms, handle sensitive data and require specialized AI safety beyond general safeguards: users are often emotionally vulnerable, errors such as misdiagnosis can exacerbate symptoms, and mishandling vulnerable states can lead to severe outcomes such as self-harm or loss of trust. Despite advances in AI safety, general safeguards inadequately address mental health-specific challenges, including accurate crisis intervention to avert escalation, adherence to therapeutic guidelines to prevent misinformation, scalability in resource-constrained settings, and adaptation to nuanced dialogues in which generic models may introduce biases or miss signals of distress. We introduce an approach that applies Constitutional AI (CAI) training with domain-specific mental health principles to build safe, domain-adapted CAI systems for computational mental health applications.
Track: 12. Emerging Topics (e.g. Agentic AI, LLMs for computational health with wearables)
Tracked Changes: pdf
NominateReviewer: Chenhan Lyu, clyu4@ics.uci.edu
Yutong Song, yutons12@uci.edu
Submission Number: 144