Medical Question-Generation for Pre-Consultation with LLM In-Context Learning

Published: 12 Oct 2024, Last Modified: 12 Nov 2024, GenAI4Health Poster, CC BY 4.0
Keywords: question-generation, in-context learning, chain-of-thought, prompting, clinical, language model, LLM
TL;DR: A study of LLM question-generation with in-context learning for medical pre-consultation
Abstract: Pre-consultation gives healthcare providers a history of present illness (HPI) prior to a patient's visit, streamlining the visit and promoting shared decision-making. Compared to a digital questionnaire, LLM-powered AI agents have proven successful in providing a more natural interface for pre-consultation. However, LLM-based approaches struggle to ask productive follow-up questions and require complex prompts to guide the consultation. While effective automated prompting strategies exist for medical question-answering LLMs, comparable strategies are lacking for question generation in pre-consultation. In this study, we develop a methodology for evaluating existing approaches to medical pre-consultation, using prior datasets of HPIs and patient-doctor dialogue. We propose a novel approach that converts abundant clinical note data into question-generation demonstrations and then retrieves relevant demonstrations for in-context learning. We find that this approach to question generation for pre-consultation achieves a higher recall of facts in ground-truth consultations than competitive baselines from prior literature, across a range of simulated patient personalities.
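To illustrate the retrieval-based in-context learning described in the abstract, the following is a minimal sketch, not the paper's actual implementation: demonstrations (context plus follow-up question) are assumed to have been mined from clinical notes, the `sentence-transformers` encoder `all-MiniLM-L6-v2`, the field names, and the prompt wording are all illustrative assumptions.

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical demonstration store: each entry pairs a partial dialogue/HPI
# context (converted from clinical notes) with the follow-up question a
# clinician would plausibly ask next. Field names are illustrative only.
DEMONSTRATIONS = [
    {"context": "Patient reports intermittent chest pain for two weeks.",
     "question": "Does the pain worsen with exertion or at rest?"},
    {"context": "Patient reports a dry cough and low-grade fever for three days.",
     "question": "Have you noticed any shortness of breath or chest tightness?"},
    # ... in practice, many demonstrations converted from clinical note data
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder could be used
demo_embeddings = encoder.encode(
    [d["context"] for d in DEMONSTRATIONS], convert_to_tensor=True
)

def build_icl_prompt(dialogue_so_far: str, top_k: int = 3) -> str:
    """Retrieve the demonstrations most similar to the current dialogue and
    format them as few-shot examples for a question-generation LLM."""
    query_embedding = encoder.encode(dialogue_so_far, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, demo_embeddings, top_k=top_k)[0]

    shots = "\n\n".join(
        f"Context: {DEMONSTRATIONS[h['corpus_id']]['context']}\n"
        f"Follow-up question: {DEMONSTRATIONS[h['corpus_id']]['question']}"
        for h in hits
    )
    return (
        "You are conducting a medical pre-consultation. Given the dialogue so far, "
        "ask the single most informative follow-up question.\n\n"
        f"{shots}\n\nContext: {dialogue_so_far}\nFollow-up question:"
    )

# Example usage: the assembled prompt would then be sent to the LLM.
print(build_icl_prompt("Patient reports a sharp pain in the lower back after lifting."))
```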
Submission Number: 61