MKG-Rank: Enhancing Large Language Models with Knowledge Graph for Multilingual Medical Question Answering

ACL ARR 2025 May Submission740 Authors

15 May 2025 (modified: 03 Jul 2025) · License: CC BY 4.0
Abstract: Large Language Models (LLMs) have shown remarkable progress in medical question answering (QA), yet their effectiveness remains predominantly limited to English due to imbalanced multilingual training data and scarce medical resources for low-resource languages. To address this critical language gap in medical QA, we propose Multilingual Knowledge Graph-based Retrieval Ranking (MKG-Rank), a knowledge graph-enhanced framework that enables English-centric LLMs to perform multilingual medical QA. Rather than translating entire questions, MKG-Rank extracts and translates only key medical terms, retrieves matching knowledge-graph facts, and injects them into the prompt of an English-trained LLM, delivering low-cost, accurate medical QA across languages. Extensive experiments on four benchmarks—Chinese, Japanese, Korean, and Swahili—show that MKG-Rank consistently surpasses zero-shot baselines by up to 35.03%. The same approach yields statistically significant gains under chain-of-thought prompting and remains effective on selected small language models, confirming backbone and prompt agnosticism. Case studies further demonstrate that MKG-Rank surfaces the retrieved facts alongside each answer, providing transparent supporting evidence and paving the way for trustworthy, explainable multilingual medical QA.
Paper Type: Short
Research Area: Multilingualism and Cross-Lingual NLP
Research Area Keywords: Medical Question Answering, Multilingualism, Retrieval-augmented Generation, Knowledge Graphs
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: Chinese, Japanese, Korean, Swahili
Keywords: Medical Question Answering, Multilingualism, Retrieval-augmented Generation, Knowledge Graphs
Submission Number: 740