H-DDx: A Hierarchical Evaluation Framework for Differential Diagnosis
Keywords: Differential Diagnosis, Large Language Models, Hierarchical Evaluation, Evaluation Framework
TL;DR: We propose H-DDx, a hierarchical evaluation framework that leverages ICD-10 taxonomy to better assess LLMs' differential diagnosis capabilities by rewarding clinically relevant near-misses that conventional flat metrics overlook.
Abstract: An accurate differential diagnosis (DDx) is essential for patient care, shaping therapeutic decisions and influencing outcomes. Recently, Large Language Models (LLMs) have emerged as promising tools to support this process by generating a DDx list from patient narratives. However, existing evaluations of LLMs in this domain primarily rely on flat metrics, such as Top-k accuracy, which fail to distinguish between clinically relevant near-misses and diagnostically distant errors. To mitigate this limitation, we introduce **H-DDx**, a hierarchical evaluation framework that better reflects clinical relevance. H-DDx leverages a retrieval and reranking pipeline to map free-text diagnoses to ICD-10 codes and applies a hierarchical metric that credits predictions closely related to the ground-truth diagnosis. Benchmarking 22 leading models, we show that conventional flat metrics underestimate performance by overlooking clinically meaningful outputs; our results also highlight the strengths of domain-specialized open-source models. Furthermore, our framework enhances interpretability by revealing hierarchical error patterns, demonstrating that LLMs often correctly identify the broader clinical context even when the precise diagnosis is missed.
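To make the idea of a hierarchical metric concrete, here is a minimal sketch of partial credit based on shared ancestry in the ICD-10 code structure (chapter letter, three-character category, full subcategory). The exact H-DDx metric is not specified in the abstract; the function names and the scoring rule below are illustrative assumptions only.

```python
# Hypothetical sketch: credit a predicted ICD-10 code by how much of the
# gold code's ancestry it shares, so near-misses in the same category
# score higher than diagnostically distant errors. Not the actual H-DDx metric.

def icd10_ancestors(code: str) -> list[str]:
    """Return increasingly specific prefixes of an ICD-10 code.
    E.g. 'I21.9' -> ['I', 'I21', 'I219'] (chapter letter, category, subcategory).
    """
    code = code.upper().replace(".", "")
    levels = [code[:1], code[:3]]
    if len(code) > 3:
        levels.append(code)
    return levels

def hierarchical_credit(pred: str, gold: str) -> float:
    """Fraction of the gold code's ancestry chain matched by the prediction."""
    p, g = icd10_ancestors(pred), icd10_ancestors(gold)
    shared = sum(1 for a, b in zip(p, g) if a == b)
    return shared / len(g)

# An exact match earns full credit; a sibling diagnosis earns partial credit;
# a code from a different chapter earns none.
print(hierarchical_credit("I21.9", "I21.9"))  # 1.0
print(hierarchical_credit("I21.0", "I21.9"))  # ~0.67 (same category I21)
print(hierarchical_credit("J18.9", "I21.9"))  # 0.0 (different chapter)
```

Under a rule like this, predicting one myocardial-infarction subtype when another is the ground truth is rewarded, whereas a flat Top-k metric would score both errors identically at zero.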
Submission Number: 48