Interpreting psychiatric digital phenotyping data with large language models: a preliminary analysis

Matthew Flathers, Winna Xia, Christine Hau, Benjamin W Nelson, Jiaee Cheong, James Burns, John Torous

Published: 01 Sept 2025. Last Modified: 14 Oct 2025. Licence: CC BY-SA 4.0
Abstract:

Background: Digital phenotyping provides passive monitoring of behavioural health but faces implementation challenges in translating complex multimodal data into actionable clinical insights. Digital navigators, healthcare staff who interpret patient data and relay findings to clinicians, offer a solution, but workforce limitations restrict scalability.

Objective: This study provides one of the first systematic evaluations of large language model performance in interpreting simulated psychiatric digital phenotyping data, establishing baseline accuracy metrics for this emerging application.

Methods: We evaluated GPT-4o and GPT-3.5-turbo across over 153 test cases covering various clinical scenarios, timeframes and data quality levels, using simulated test datasets currently employed in training human digital navigators. Performance was assessed on each model's capacity to identify clinical patterns relative to human digital navigation experts.

Findings: GPT-4o demonstrated 52% accuracy (95% CI 46.5% to 57.6%) in identifying clinical patterns in standard test cases, significantly outperforming GPT-3.5-turbo (12%, 95% CI 8.4% to 15.6%). Across scenarios, GPT-4o performed best on worsening depression (100%) and worsening anxiety (83%) patterns and worst on increased home time with improving symptoms (6%). Accuracy declined with decreasing data quality (69% for high-quality data vs 39% for low-quality data) and with shorter timeframes (60% for 3-month data vs 43% for 3-week data).

Conclusions: GPT-4o's 52% accuracy in zero-shot interpretation of psychiatric digital phenotyping data establishes a meaningful baseline, though performance gaps and occasional hallucinations confirm that human oversight in digital navigation tasks remains essential. The significant performance variations across models, data quality levels and clinical scenarios highlight the need for careful implementation.

Clinical implications: Large language models could serve as assistive tools that augment human digital navigators, potentially addressing workforce limitations while maintaining necessary clinical oversight in psychiatric digital phenotyping applications.
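The evaluation described above scores each model's pattern label against an expert-assigned label and reports accuracy with a 95% confidence interval. The sketch below shows one plausible way such scoring could be done; the label names and data are hypothetical, and the normal-approximation (Wald) interval is an assumption, since the paper's abstract does not specify its CI method.

```python
from math import sqrt

def accuracy_with_ci(correct: int, total: int, z: float = 1.96):
    """Accuracy proportion with a normal-approximation (Wald) 95% CI.

    Note: the Wald interval is an illustrative assumption; the paper
    does not state which CI method was used.
    """
    p = correct / total
    half = z * sqrt(p * (1 - p) / total)
    # Clamp to the valid [0, 1] range for proportions.
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical scoring loop: expert labels vs. model-assigned labels
# for four simulated test cases (label names are illustrative only).
expert_labels = ["worsening_depression", "worsening_anxiety",
                 "increased_home_time_improving", "stable"]
model_labels  = ["worsening_depression", "worsening_anxiety",
                 "stable", "stable"]

correct = sum(e == m for e, m in zip(expert_labels, model_labels))
acc, lo, hi = accuracy_with_ci(correct, len(expert_labels))
print(f"accuracy = {acc:.0%} (95% CI {lo:.1%} to {hi:.1%})")
```

On real data the interval narrows as the number of test cases grows, which is why the reported CIs (e.g. 46.5% to 57.6% around 52%) are much tighter than this toy example's.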