Evaluating Large Language Models for Colonoscopy Preparation Assistance: Correctness and Diversity in Synthetic Dialogues
Abstract:

Background: Colorectal cancer is the third leading cause of cancer-related deaths in the United States, and colonoscopy remains the gold standard for early detection and prevention. However, many procedures are postponed due to inadequate bowel preparation, a preventable failure often caused by patients' difficulty in understanding or following written prep instructions. Prior interventions such as reminder apps and instructional videos have improved adherence only modestly, largely because they cannot answer patients' specific questions. Recent advances in large language models (LLMs) raise the possibility of developing conversational assistants that provide interactive support to patients preparing for procedures.

Objective: This study evaluated the correctness and diversity of synthetic dialogues generated by leading LLMs acting as both simulated AI Coaches and patients for colonoscopy preparation.

Methods: Four leading LLMs (OpenAI's o3 and GPT-4.1, Meta's Llama 3.3 70B, and Mistral's Large-2411) were used to generate 250 patient-AI Coach dialogues per model. Prompts were designed to elicit diverse patient questions about diet, medications, and other prep-related topics. Human raters, including medical experts, evaluated responses for correctness, error type, and potential harmfulness. Automatic evaluation using an LLM-as-a-judge approach complemented the human evaluation.

Results: Leading models approached but did not achieve adequate performance. Closed-weight models (GPT-4.1, o3) outperformed open-weight models (Llama, Mistral) on correctness, while multi-prompt generation substantially improved question diversity. All models produced harmful errors, primarily due to omissions or misinterpretations of prep instructions.

Conclusions: While LLMs demonstrate strong potential for colonoscopy preparation support, none are yet reliable enough for unsupervised deployment in patient-facing contexts without effective safety layers.
External IDs: doi:10.1101/2025.11.19.25340596