Dr. Assistant: Enhancing Clinical Diagnostic Inquiry via Structured Diagnostic Reasoning Data and Reinforcement Learning
Keywords: Large Language Models (LLMs), Medical AI, Clinical Decision Support, Data Synthesis, Post-training
Abstract: Clinical Decision Support Systems (CDSSs) provide reasoning and inquiry guidance for physicians, yet they face notable challenges, including high maintenance costs and low generalization capability.
Recently, Large Language Models (LLMs) have been widely adopted in healthcare owing to their extensive knowledge, retrieval, and communication capabilities. While LLMs show promise and excel on medical benchmarks, their diagnostic reasoning and inquiry skills remain limited.
To mitigate this issue, we propose (1) a Clinical Diagnostic Reasoning Data (CDRD) structure that captures abstract clinical reasoning logic, together with a pipeline for its construction, and (2) Dr. Assistant, a clinical diagnostic model equipped with clinical reasoning and inquiry skills. Its training involves a two-stage process: supervised fine-tuning (SFT), followed by reinforcement learning (RL) with a tailored reward function.
We also introduce a benchmark to evaluate both diagnostic reasoning and inquiry.
Our experiments demonstrate that Dr. Assistant outperforms open-source models and achieves performance competitive with closed-source models, providing an effective solution for clinical diagnostic inquiry guidance.
Paper Type: Long
Research Area: Clinical and Biomedical Applications
Research Area Keywords: Dialogue and Interactive Systems, Information Extraction, Generation, Human-Centered NLP, NLP Applications
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings, Data resources
Languages Studied: Chinese
Submission Number: 3061