When Can We Trust LLMs in Mental Health? Large-Scale Benchmarks for Reliable LLM Evaluation

Submitted: 18 Sept 2025 (modified: 12 Feb 2026)
Venue: ICLR 2026 Conference (Desk Rejected Submission)
License: CC BY 4.0
Keywords: Mental health, LLM Response, Dataset and benchmarks, Human evaluation, LLM as a judge, Agreement
Abstract: Evaluating Large Language Models (LLMs) for mental health support poses unique challenges due to the emotionally sensitive and cognitively complex nature of therapeutic dialogue. Existing benchmarks are limited in scale, authenticity, and reliability, often relying on synthetic or social media data. To address this gap, we introduce two complementary benchmarks that together provide a framework for generation and evaluation in this domain. MentalBench-100k consolidates 10,000 authentic single-session therapeutic conversations from three real-world datasets, each paired with nine LLM-generated responses, yielding 100,000 response pairs for assessing cognitive and affective trade-offs in response generation. MentalAlign-70k reframes evaluation by comparing four high-performing LLM judges with human experts across 70,000 ratings on seven attributes, grouped into the Cognitive Support Score (CSS) and the Affective Resonance Score (ARS). We introduce the Affective–Cognitive Agreement Framework, a statistical methodology using intraclass correlation coefficients (ICC) with bootstrap confidence intervals and bias analysis, to quantify both agreement magnitude and precision. Our analysis reveals systematic inflation by LLM judges, strong reliability for cognitive attributes such as guidance and informativeness, reduced precision for affective dimensions like empathy, and persistent unreliability in safety and relevance. These findings highlight when LLM-as-a-judge evaluation can be trusted and where human oversight remains essential. Together, our contributions establish new methodological and empirical foundations for reliable, large-scale evaluation of LLMs in mental health contexts.
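The abstract's agreement methodology (ICC with bootstrap confidence intervals plus a bias term between LLM-judge and human ratings) can be sketched as follows. This is a minimal illustration, not the authors' released code: the choice of ICC(2,1) (two-way random effects, absolute agreement, single rater), the percentile bootstrap over items, and all variable names are assumptions made for the example.

```python
# Illustrative sketch of an ICC-based agreement analysis with bootstrap CIs
# and a judge-inflation (bias) term. Assumes one human and one LLM judge
# rating the same items on a single attribute; not the paper's actual code.
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` has shape (n_items, n_raters)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-item means
    col_means = ratings.mean(axis=0)   # per-rater means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

def bootstrap_icc(ratings: np.ndarray, n_boot: int = 2000, seed: int = 0):
    """Point estimate and 95% percentile-bootstrap CI, resampling items."""
    rng = np.random.default_rng(seed)
    n = ratings.shape[0]
    stats = [icc2_1(ratings[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return icc2_1(ratings), np.percentile(stats, [2.5, 97.5])

# Toy usage: column 0 = human expert, column 1 = LLM judge (hypothetical data).
human = np.array([4, 3, 5, 2, 4, 3, 5, 4], dtype=float)
llm   = np.array([5, 4, 5, 3, 4, 4, 5, 5], dtype=float)
icc, (lo, hi) = bootstrap_icc(np.column_stack([human, llm]))
bias = llm.mean() - human.mean()  # positive values indicate judge inflation
print(f"ICC(2,1)={icc:.2f}, 95% CI=({lo:.2f}, {hi:.2f}), bias={bias:+.2f}")
```

In this reading, the CI width captures the "precision" the abstract distinguishes from agreement magnitude, and the mean-difference bias term captures the reported systematic inflation by LLM judges.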
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 13878