Inspired by the hospital expert consultation model, this paper proposes a conversational medical vision-language model for orthopedics, named WenXinGPT (Multi-disciplinary Collaboration). The core idea is to align medical visual and textual representations so that high-quality data can be leveraged to generate expert consultation dialogues across hospital departments. The primary objective is to uncover orthopedic knowledge embedded in medical intelligence models and to enhance their reasoning abilities in an interpretable manner without requiring additional training. Our research focuses in particular on zero-shot scenarios, and experimental results on 16 datasets provided by Peking Union Medical College Hospital demonstrate that the proposed WenXinGPT framework excels at mining and utilizing medical expertise within large language models while also expanding their reasoning capabilities. Building on these findings, we conducted manual evaluations to identify and categorize common errors made by our method, along with ablation studies aimed at understanding how various factors affect overall performance.