Keywords: Cognitive Machine Learning, Clinical Trials, Survey Fatigue, Cognitive Load, Human-Centered AI, Reinforcement Learning, Patient Engagement, Electronic Patient-Reported Outcomes (ePROs)
Abstract: Clinical trials are essential to advancing medical knowledge, but current data collection methods impose heavy cognitive demands on participants [1]. A major challenge is survey fatigue, particularly in electronic patient-reported outcomes (ePROs), where repeated questionnaires lead to habituation, superficial responses, and disengagement [2]. These effects undermine data reliability and erode trust in the trial process. We propose a cognitive machine learning framework that addresses this problem by embedding insights from cognitive science into ML-driven trial design [3].
We outline a phased roadmap for integrating cognitive science and ML into clinical research. Phase 1 introduces lightweight cognitive overlays into ePRO systems: adaptive phrasing, variable timing, and reinforcement learning-based adjustments designed to minimize cognitive load [1] while sustaining engagement through strategies analogous to cognitive behavioral techniques such as reframing and gradual exposure [4]. Phase 2 integrates a patient-first cognitive ML model trained with behavioral priors (e.g., Centaur-style models [5]) to sustain engagement and predict dropout risk. This model delivers calibrated summaries and defers low-confidence cases to clinicians. Phase 3 reframes trials as cognitive ecosystems, where participant engagement metrics such as attention, comprehension, and trust become first-class endpoints alongside biomarkers.
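To make the Phase 1 and Phase 2 mechanisms concrete, the sketch below pairs a Bernoulli Thompson-sampling bandit, one simple form of reinforcement learning-based adjustment over question phrasing, with a confidence-threshold deferral rule for dropout predictions. The phrasing variants, the completion-plus-latency reward proxy for cognitive load, the thresholds, and all function names are illustrative assumptions, not components of a validated system.

import random

# Hypothetical phrasing variants for a single ePRO item; the labels,
# the reward proxy, and the thresholds below are illustrative assumptions.
VARIANTS = ["original wording", "simplified wording", "conversational wording"]

class ThompsonOverlay:
    """Bernoulli Thompson sampling over phrasing variants.

    Reward = 1 if the participant answers completely and without long
    hesitation (a crude proxy for low cognitive load), else 0.
    """

    def __init__(self, variants):
        self.variants = variants
        self.alpha = {v: 1.0 for v in variants}  # Beta(1, 1) prior per variant
        self.beta = {v: 1.0 for v in variants}

    def choose(self):
        # Sample a plausible success rate for each variant and present the best.
        draws = {v: random.betavariate(self.alpha[v], self.beta[v])
                 for v in self.variants}
        return max(draws, key=draws.get)

    def update(self, variant, answered, latency_s, latency_cutoff_s=20.0):
        # Binary reward: answered and did not hesitate past the cutoff.
        reward = 1.0 if (answered and latency_s <= latency_cutoff_s) else 0.0
        self.alpha[variant] += reward
        self.beta[variant] += 1.0 - reward

def route_dropout_prediction(p_dropout, confidence, confidence_threshold=0.7):
    # Phase 2-style deferral: act only on confident predictions,
    # and hand low-confidence cases to a clinician.
    if confidence < confidence_threshold:
        return "defer_to_clinician"
    return "flag_for_outreach" if p_dropout >= 0.5 else "continue_as_planned"

if __name__ == "__main__":
    overlay = ThompsonOverlay(VARIANTS)
    variant = overlay.choose()                               # wording shown to the participant
    overlay.update(variant, answered=True, latency_s=12.0)   # observed outcome feeds the bandit
    print(variant, route_dropout_prediction(p_dropout=0.62, confidence=0.55))

In practice the reward signal, latency cutoff, and deferral threshold would be chosen with clinician input and validated against observed data quality rather than hard-coded as here.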
We illustrate the feasibility of our approach through a survey fatigue case study. A conversational overlay adaptively rephrases questions, detects hesitation cues, and introduces cognitive scaffolding to sustain attention. Outputs are mapped back into standard ePRO formats with auditable traces. Early results suggest this approach can reduce missingness, improve response quality, and lower dropout risk while preserving regulatory compliance and clinician oversight [6].
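A minimal sketch of how a conversational reply could be written back into a standard ePRO record with an auditable trace is given below; the item code, response scale, latency-based hesitation flag, and field names are hypothetical placeholders, not the format used in the case study.

import json
from datetime import datetime, timezone

# Hypothetical item code, verbal response scale, and hesitation threshold;
# a real deployment would use the trial's validated instrument and data standard.
ITEM_ID = "FATIGUE_01"
SCALE = {"not at all": 0, "a little": 1, "somewhat": 2,
         "quite a bit": 3, "very much": 4}
HESITATION_THRESHOLD_S = 15.0

def map_conversational_answer(utterance, latency_s, rephrased=False):
    """Map a free-text reply onto the item's ordinal scale and keep an audit trace."""
    normalized = utterance.strip().lower()
    score = SCALE.get(normalized)  # None when the reply matches no scale anchor
    return {
        "item_id": ITEM_ID,
        "score": score,                       # the standard ePRO field
        "raw_utterance": utterance,           # auditable trace of what was said
        "rephrased_prompt_used": rephrased,   # whether the overlay reworded the item
        "response_latency_s": latency_s,
        "hesitation_flag": latency_s > HESITATION_THRESHOLD_S,
        "needs_review": score is None,        # unmatched replies go to human review
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    record = map_conversational_answer("quite a bit", latency_s=22.4, rephrased=True)
    print(json.dumps(record, indent=2))

Keeping the raw utterance, the rephrasing flag, and the review flag alongside the scored value is what would make such a trace auditable for clinicians and regulators.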
Our work represents both an epistemic and an ethical shift: it reframes survey responses not as noisy compliance data but as cognitively mediated signals. By modeling participant comprehension, attention, and engagement, cognitive ML provides pathways to reduce attrition, improve inclusivity, and strengthen patient trust. This case study demonstrates how cognitively aware trials can enhance reliability, equity, and participant-centeredness.
References
[1] J. Sweller. Cognitive load during problem solving: effects on learning. Cognitive Science, 12(2):257–285, 1988.
[2] S. Rolstad, J. Adler, and A. Rydén. Response burden and questionnaire length: is shorter better? A review and meta-analysis. Value in Health, 14(8):1101–1108, 2011.
[3] A. Tversky and D. Kahneman. Judgment under uncertainty: heuristics and biases. Science, 185(4157):1124–1131, 1974.
[4] F. Lieder and T. L. Griffiths. Resource-rational analysis: understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences, 43:e1, 2020.
[5] M. Binz, E. Akata, M. Bethge, et al. A foundation model to predict and capture human cognition. Nature, 644:1002–1009, 2025. doi:10.1038/s41586-025-09215-4.
[6] E. Basch, A. M. Deal, A. C. Dueck, H. I. Scher, M. G. Kris, C. Hudis, and D. Schrag. Overall survival results of a trial assessing patient-reported outcomes for symptom monitoring during routine cancer treatment. JAMA, 318(2):197–198, 2017.
Submission Number: 116