Resilience Outcomes Benchmark: Toward an Outcome-Labeled Coping Strategy Dataset for Precision Mental Health
Track: Track 2: Dataset Proposal Competition
Keywords: precision mental health, outcome-supervised learning, coping strategies, dose–response modeling, contextual ranking, resilience, models-to-data, benchmark dataset, PHQ-9, GAD-7, WHO-5
TL;DR: Outcome-labeled mental-health dataset (10k vignettes + 3–5k cohort) enabling contextual strategy ranking, dose–response, and 30/90-day forecasts via a models-to-data server.
Abstract: Most AI benchmarks still measure static competence—accuracy on fixed math, coding, and knowledge-recall tasks. But intelligence that matters in care is adaptive effectiveness: knowing which actions help which people, at what dose, and on what timeline. Mental health AI today lacks the foundational resource that transformed vision (ImageNet) and language (Common Crawl): outcome-labeled supervision. We propose the Resilience Outcomes Benchmark (ROB), a two-phase, openly shareable dataset that operationalizes outcome-supervised learning for recovery after major stressors (bereavement, divorce, job loss, illness). Phase 1 releases 10k+ expert-labeled vignettes linking context to coping strategies with effectiveness and harm-risk ratings (PHI-free), enabling contextual strategy ranking. Phase 2 is a governed outcomes cohort capturing consented, real-world strategy use with dose/adherence and validated outcomes at 30/90 days (PHQ-9, GAD-7, WHO-5), evaluated via a models-to-data server (no row-level export). ROB turns context→strategy→outcome into measurable supervision with benchmarks for NDCG@k, dose–response, and calibrated 30/90-day forecasts. By filling this gap, ROB could catalyze precision mental health—a domain with $1T+ in global costs.
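The abstract's primary ranking metric, NDCG@k, can be sketched as follows. This is a minimal illustrative implementation, not the benchmark's official evaluation code; the strategy names, relevance scale (expert effectiveness ratings 0–3), and example ranking are hypothetical assumptions.

```python
import math

def dcg_at_k(relevances, k):
    # Discounted cumulative gain: relevance of each of the top-k items,
    # discounted by log2 of its (1-indexed) rank position plus one.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_relevances, k):
    # Normalize by the DCG of the ideal ordering (relevances sorted descending),
    # so a perfect ranking scores 1.0.
    ideal_dcg = dcg_at_k(sorted(ranked_relevances, reverse=True), k)
    return dcg_at_k(ranked_relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical vignette: expert effectiveness ratings (0-3) for five coping
# strategies, listed in the order a model ranked them.
model_ranking = [3, 1, 0, 2, 0]
print(round(ndcg_at_k(model_ranking, 3), 4))  # → 0.7625
```

Under this formulation, a model is rewarded for placing the strategies experts rated most effective for that specific context near the top of its list, which matches the "contextual strategy ranking" task Phase 1 is designed to supervise.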
Submission Number: 501