Faithful or Just Plausible? Evaluating Faithfulness for Medical Reasoning in Closed-Source LLMs

Published: 12 Oct 2025, Last Modified: 27 Nov 2025GenAI4Health 2025 PosterEveryoneRevisionsBibTeXCC BY 4.0
Keywords: LLMs; faithfulness; medical; reasoning; chain-of-thought; CoT; causal ablation; positional bias; hint injection; closed-source LLMs; MedQA; human evaluation; trust; safety.
TL;DR: Black-box study of three popular closed-source LLMs probes faithfulness in medical reasoning tasks.
Abstract: Closed-source large language models (LLMs), such as ChatGPT and Gemini, are increasingly consulted for medical advice, yet their explanations may appear plausible while failing to reflect the model’s underlying reasoning process. This gap poses serious risks as patients and clinicians may trust coherent but misleading explanations. We conduct a systematic black-box evaluation of faithfulness in medical reasoning among three widely used closed-source LLMs. Our study consists of three perturbation-based probes: (1) causal ablation, testing whether stated chain-of-thought (CoT) reasoning causally influences predictions; (2) positional bias, examining whether models create post-hoc justifications for answers driven by input positioning; and (3) hint injection, testing susceptibility to external suggestions. We complement these quantitative probes with a small-scale human evaluation of model responses to patient-style medical queries to examine concordance between physician assessments of explanation faithfulness and layperson perceptions of trustworthiness. We find that CoT reasoning steps often do not causally drive predictions, and models readily incorporate external hints without acknowledgment. In contrast, positional biases showed minimal impact in this setting. These results underscore that faithfulness, not just accuracy, must be central in evaluating LLMs for medicine, to ensure both public protection and safe clinical deployment.
Submission Number: 100
Loading