MedCoT-RAG: Causal Chain-of-Thought RAG for Medical Question Answering

Published: 19 Aug 2025, Last Modified: 24 Sept 2025 · IEEE BSN 2025 · CC BY 4.0
Confirmation: I have read and agree with the IEEE BSN 2025 conference submission's policy on behalf of myself and my co-authors.
Keywords: Medical Question Answering, Clinical Reasoning, Causal Inference, Large Language Models, Electronic Health Records
Abstract: Large language models (LLMs) have shown promise in medical question answering but often struggle with hallucinations and shallow reasoning, particularly in tasks requiring nuanced clinical understanding. Retrieval-augmented generation (RAG) offers a practical and privacy-preserving way to enhance LLMs with external medical knowledge. However, most existing approaches rely on surface-level semantic retrieval and lack the structured reasoning needed for clinical decision support. We introduce MedCoT-RAG, a domain-specific framework that combines causal-aware document retrieval with structured chain-of-thought prompting tailored to medical workflows. This design enables models to retrieve evidence aligned with diagnostic logic and generate step-by-step causal reasoning reflective of real-world clinical practice. Experiments on three diverse medical QA benchmarks show that MedCoT-RAG outperforms strong baselines by up to 10.3% over vanilla RAG and 6.4% over advanced domain-adapted methods, improving accuracy, interpretability, and consistency in complex medical tasks.
Track: 12. Emerging Topics (e.g. Agentic AI, LLMs for computational health with wearables)
Tracked Changes: pdf
Nominate Reviewer: Ziyu Wang, ziyuw31@uci.edu; Elahe Khatibi, ekhatibi@uci.edu
Submission Number: 147