Abstract: Integrating large language models (LLMs) with medical knowledge graphs presents a promising frontier in healthcare AI, enabling more accurate clinical decision support, patient-specific recommendations, and interpretable diagnostic reasoning. However, the complexity of multi-step reasoning over medical ontologies and patient data graphs reveals the limitations of current Chain-of-Thought-based approaches. These methods struggle with incomplete subgraph retrieval, inefficient multi-hop reasoning across clinical entities, and challenges in contextualizing longitudinal patient data. To address these limitations, we propose SRR-RAG, a structured reasoning retrieval framework tailored for the medical domain. SRR-RAG enhances traditional retrieval-augmented generation by explicitly encoding clinical relationships, temporal constraints, and multi-hop dependencies within medical queries. This structured approach supports comprehensive reasoning across complex medical graphs, enabling accurate and interpretable responses for tasks like differential diagnosis or treatment planning. To mitigate semantic ambiguity and cognitive bias in structured query generation, we introduce type-aware pre-anchoring and reflective reasoning strategies. These mechanisms improve the alignment between natural language queries and graph-based medical knowledge, enhancing retrieval precision and clinical relevance. Extensive experiments on benchmark datasets and simulated electronic health records demonstrate that SRR-RAG significantly outperforms existing Graph RAG approaches in retrieval accuracy, reasoning completeness, and computational efficiency.
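The abstract describes SRR-RAG queries that encode an anchor entity, a multi-hop relation path, and clinical constraints before retrieval from the medical graph. The paper's actual query format is not given here, so the following is only a minimal illustrative sketch with hypothetical names (`StructuredQuery`, `multi_hop_retrieve`) and a toy two-triple knowledge graph, showing how a type-anchored multi-hop query could be resolved against a triple store:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Triple:
    """A single (head, relation, tail) edge in a toy medical knowledge graph."""
    head: str
    relation: str
    tail: str


@dataclass(frozen=True)
class StructuredQuery:
    """Hypothetical structured query: a type-anchored entity plus a
    multi-hop relation path (temporal constraints omitted for brevity)."""
    anchor: str                 # entity fixed by type-aware pre-anchoring
    relation_path: tuple        # ordered relations forming the multi-hop dependency


def multi_hop_retrieve(graph: list, query: StructuredQuery) -> set:
    """Follow the relation path hop by hop, expanding the frontier of entities."""
    frontier = {query.anchor}
    for relation in query.relation_path:
        frontier = {
            t.tail for t in graph
            if t.head in frontier and t.relation == relation
        }
    return frontier


# Toy graph: a drug treats a condition, and the condition has a symptom.
KG = [
    Triple("Metformin", "treats", "Type2Diabetes"),
    Triple("Type2Diabetes", "has_symptom", "Polyuria"),
]

query = StructuredQuery(anchor="Metformin",
                        relation_path=("treats", "has_symptom"))
print(multi_hop_retrieve(KG, query))  # → {'Polyuria'}
```

A flat keyword retriever would have to match "Metformin" and "symptom" in one pass; the explicit relation path instead walks the graph hop by hop, which is the kind of structured multi-hop dependency the abstract attributes to SRR-RAG.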
External IDs: dblp:journals/hisas/WangWSCG25