Keywords: medical inquiry, agent, evaluation
TL;DR: MAQuE is a benchmark that evaluates AI-driven medical questioning capabilities across multiple metrics, using 3,000 patient simulations with diverse behavioral traits.
Abstract: An effective physician should combine empathy, expertise, patience, and clear communication when treating a patient.
Recent advances have endowed AI doctors with expert diagnostic skills, particularly the ability to actively seek information through inquiry. However, other essential qualities of a good doctor remain overlooked.
To bridge this gap, we present MAQuE (Medical Agent Questioning Evaluation), the largest benchmark to date for the automatic and comprehensive evaluation of medical multi-turn questioning. It features 3,000 realistically simulated patient agents that exhibit diverse linguistic patterns, cognitive limitations, emotional responses, and tendencies toward passive disclosure. We also introduce a multi-faceted evaluation framework covering task success, inquiry proficiency, dialogue competence, inquiry efficiency, and patient experience.
Experiments on a range of LLMs reveal substantial challenges across these evaluation aspects. Even state-of-the-art models show significant room for improvement in their inquiry capabilities. These models are also highly sensitive to variations in realistic patient behavior, which considerably impacts diagnostic accuracy. Furthermore, our fine-grained metrics expose trade-offs between evaluation perspectives, highlighting the challenge of balancing performance and practicality in real-world clinical settings.
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 1897