LlaMADRS: Prompting Large Language Models for Interview-Based Depression Assessment

ACL ARR 2024 December Submission 2022 Authors

16 Dec 2024 (modified: 05 Feb 2025) · ACL ARR 2024 December Submission · CC BY 4.0
Abstract: This study introduces LlaMADRS, a novel framework leveraging open-source Large Language Models (LLMs) to automate depression severity assessment using the Montgomery-Åsberg Depression Rating Scale (MADRS). We employ a zero-shot prompting strategy with carefully designed cues to guide the model in interpreting and scoring transcribed clinical interviews. Our approach, tested on 236 real-world interviews from the Context-Adaptive Multimodal Informatics (CAMI) dataset, demonstrates strong correlations with clinician assessments. The Qwen 2.5-72B model achieves near-human-level agreement across most MADRS items, with Intraclass Correlation Coefficients (ICC) closely approaching those between human raters. We provide a comprehensive analysis of model performance across different MADRS items, highlighting strengths and current limitations. Our findings suggest that LLMs, with appropriate prompting, can serve as efficient tools for mental health assessment, potentially increasing accessibility in resource-limited settings. However, challenges remain, particularly in assessing symptoms that rely on non-verbal cues, underscoring the need for multimodal approaches in future work.
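To make the described approach concrete, below is a minimal sketch of zero-shot prompting for a single MADRS item from a transcribed interview. The prompt wording, scale anchors, and the small stand-in model used here are illustrative assumptions, not the paper's actual prompts or setup (the paper reports results with Qwen 2.5-72B).

```python
# Minimal sketch (assumed prompt wording and anchors, not the authors' exact design):
# zero-shot scoring of one MADRS item from an interview transcript with an
# open-source chat model via Hugging Face transformers.
from transformers import pipeline

MADRS_ITEM = "Apparent sadness"  # one of the 10 MADRS items
ANCHORS = (
    "0 = no sadness; 2 = looks dispirited but brightens up; "
    "4 = appears sad and unhappy most of the time; 6 = looks miserable all the time"
)

def build_prompt(transcript: str) -> list[dict]:
    """Compose a zero-shot chat prompt asking for a single MADRS item score."""
    system = (
        "You are a clinical rater. Score the MADRS item strictly from the "
        "transcript, using only verbal evidence. Reply with an integer 0-6 "
        "followed by a one-sentence justification."
    )
    user = (
        f"MADRS item: {MADRS_ITEM}\nScale anchors: {ANCHORS}\n\n"
        f"Interview transcript:\n{transcript}\n\nScore:"
    )
    return [{"role": "system", "content": system}, {"role": "user", "content": user}]

if __name__ == "__main__":
    # A small instruct model stands in for Qwen 2.5-72B so the sketch runs locally.
    generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
    transcript = (
        "Interviewer: How has your mood been this past week?\n"
        "Participant: Pretty low, honestly. I haven't felt like doing much."
    )
    out = generator(build_prompt(transcript), max_new_tokens=80)
    print(out[0]["generated_text"][-1]["content"])
```

In the paper's setting, each of the ten MADRS items would be scored this way and the resulting item scores compared against clinician ratings (e.g., via ICC).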
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: Generation, NLP Applications, Computational Social Science, Clinical NLP, Mental Health Assessment, Depression Detection, Prompting Language Models
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 2022