Keywords: large language models (LLMs), medical diagnosis, patient communication, health literacy, understandability, empathy, bias, AI4Good
Abstract: Large language models (LLMs) show promise for supporting diagnostic communication by generating explanations and guidance for patients. Yet their ability to produce outputs that are both understandable and empathetic remains uncertain. We assess two leading LLMs on medical diagnostic scenarios, measuring understandability with readability metrics and empathy via an LLM-as-a-Judge protocol compared against human ratings. Our results indicate that LLMs adapt explanations to sociodemographic variables and patient conditions. However, they also generate overly complex content and display biased affective empathy, leading to uneven accessibility and support. These patterns underscore the need for systematic calibration to ensure equitable patient communication.
Supplementary Material: zip
Submission Number: 163