Position: Beyond Assistance – Reimagining LLMs as Ethical and Adaptive Co-Creators in Mental Health Care
Abstract: This position paper argues for a fundamental shift in how Large Language Models (LLMs) are integrated into mental health care. We advocate for their role as co-creators rather than mere assistive tools. While LLMs have the potential to enhance accessibility, personalization, and crisis intervention, their adoption remains limited by concerns about bias, evaluation, over-reliance, dehumanization, and regulatory uncertainty. To address these challenges, we propose two structured pathways: the SAFE-i (Supportive, Adaptive, Fair, and Ethical Implementation) Guidelines for ethical and responsible deployment, and the HAAS-e (Human-AI Alignment and Safety Evaluation) Framework for multidimensional, human-centered assessment. SAFE-i provides a blueprint for data governance, adaptive model engineering, and real-world integration, ensuring that LLMs align with clinical and ethical standards. HAAS-e introduces evaluation metrics that go beyond technical accuracy to measure trustworthiness, empathy, cultural sensitivity, and actionability. We call for the adoption of these structured approaches to establish a responsible and scalable model for LLM-driven mental health support, ensuring that AI complements, rather than replaces, human expertise.
Lay Summary: What if AI could be your teammate, not your replacement, in delivering compassionate mental health care?
As the digital-native generation turns to tools like ChatGPT for everything from schoolwork to career advice, it won’t be long before they rely on AI for emotional and mental health support as well. The question is no longer whether LLMs belong in mental health care, but how they can contribute safely, ethically, and meaningfully. This paper argues that, when designed with ethical and safety considerations in mind, LLMs are ready to do more than automate tasks: they can help ease the burden on overstretched care teams, provide personalized guidance, and offer timely support. But the stakes are high. Without proper safeguards, LLMs can cause serious harm by spreading bias and misinformation or by leading users to place unwarranted trust in their responses. Strong safeguards are essential to ensure these tools are safe, reliable, and aligned with ethical standards.
To translate this vision into action, our position proposes two frameworks. SAFE-i supports responsible design and deployment through three pillars: Ethical Data Foundations, Model Engineering, and Real-World Integration. HAAS-e provides a human-centered evaluation framework built around four essential dimensions (trustworthiness, fairness, empathy, and actionability) and introduces metrics such as the Contextual Empathy Score (CES), Cultural Sensitivity Index (CSI), Personalization Appropriateness Score (PAS), and Actionability and Safety Assessment (ASA). Together, these tools offer a practical roadmap for aligning AI systems with human values, clinical goals, and diverse cultural contexts, empowering mental health professionals with adaptive, ethical, and empathetic AI collaborators.
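To make the HAAS-e dimensions concrete, the sketch below shows one hypothetical way the four named metrics could be combined into a single score. The paper does not specify scoring or aggregation; the [0, 1] normalization, the equal default weights, and the `HAASEScores` and `composite_haas_e` names are illustrative assumptions, not part of the framework's specification.

```python
# Illustrative sketch only: the paper names four HAAS-e metrics (CES, CSI,
# PAS, ASA) but does not define how they are scored or combined. The [0, 1]
# normalization and equal default weights here are hypothetical assumptions.

from dataclasses import dataclass


@dataclass
class HAASEScores:
    ces: float  # Contextual Empathy Score
    csi: float  # Cultural Sensitivity Index
    pas: float  # Personalization Appropriateness Score
    asa: float  # Actionability and Safety Assessment


def composite_haas_e(scores: HAASEScores,
                     weights: dict[str, float] | None = None) -> float:
    """Weighted average of the four HAAS-e metrics, each assumed in [0, 1]."""
    weights = weights or {"ces": 0.25, "csi": 0.25, "pas": 0.25, "asa": 0.25}
    for name in ("ces", "csi", "pas", "asa"):
        value = getattr(scores, name)
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be in [0, 1], got {value}")
    total = sum(weights.values())
    return sum(w * getattr(scores, n) for n, w in weights.items()) / total


# Example: a response that is empathetic and safe but weakly personalized.
print(composite_haas_e(HAASEScores(ces=0.9, csi=0.8, pas=0.5, asa=0.95)))
```

In practice, a deployment might weight the safety-oriented ASA dimension more heavily than the others, which the `weights` parameter accommodates.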
Verify Author Names: My co-authors have confirmed that their names are spelled correctly both on OpenReview and in the camera-ready PDF. (If needed, please update ‘Preferred Name’ in OpenReview to match the PDF.)
No Additional Revisions: I understand that after the May 29 deadline, the camera-ready submission cannot be revised before the conference. I have verified with all authors that they approve of this version.
Pdf Appendices: My camera-ready PDF file contains both the main text (not exceeding the page limits) and all appendices that I wish to include. I understand that any other supplementary material (e.g., separate files previously uploaded to OpenReview) will not be visible in the PMLR proceedings.
Latest Style File: I have compiled the camera-ready paper with the latest ICML 2025 style files <https://media.icml.cc/Conferences/ICML2025/Styles/icml2025.zip> and the compiled PDF includes an unnumbered Impact Statement section.
Paper Verification Code: MjRhN
Permissions Form: pdf
Primary Area: Research Priorities, Methodology, and Evaluation
Keywords: Large Language Models, Mental Health, SAFE Implementation, HAAS Evaluation, Complementary AI, Ethics, Human Expertise
Submission Number: 279