Keywords: AI Psychological Counselor, Benchmark, Multi-Therapy, Multi-Session, Evaluation
Abstract: To develop a reliable AI for psychological counseling, we introduce PsychEval, a multi-session, multi-therapy, and highly realistic benchmark designed to address three key challenges. 1) Can we train a highly realistic AI counselor? Realistic counseling is a longitudinal task requiring sustained memory and dynamic goal tracking. We propose a multi-session benchmark (spanning 6-10 sessions across three distinct stages) that demands critical capabilities such as memory continuity, adaptive reasoning, and longitudinal planning. The dataset is annotated with extensive professional skills, comprising over 677 meta-skills and 4,577 atomic skills. 2) How can we train a multi-therapy AI counselor? While existing models often focus on a single therapy, complex cases frequently require flexibly switching strategies among multiple therapies. We construct a diverse dataset covering five therapeutic modalities, alongside an integrative therapy, under a unified three-stage clinical framework spanning six core psychological topics. 3) How can we systematically evaluate an AI counselor? We establish a holistic evaluation framework with 18 therapy-specific and therapy-shared metrics across Client-Level and Counselor-Level dimensions, and additionally construct over 2,000 diverse client profiles. Extensive experimental analysis validates the superior quality and clinical fidelity of our dataset. Our datasets and evaluation framework are anonymously available at this repository.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: benchmarking, automatic evaluation of datasets, metrics, automatic creation and evaluation of language resources
Contribution Types: Data resources
Languages Studied: cn, en
Submission Number: 9138