Large Language Models Reasoning on Knowledge Graph for Evidence Grounded Generation

Authors: Sunstella 2023 Summer Research Camp Submission 12 Authors

Published: 15 Jun 2023 (modified: 22 Jun 2023)
Keywords: LLM reasoning, knowledge graph, evidence grounded generation
TL;DR: We propose an approach that leverages multi-subgraph-of-thought reasoning in knowledge graphs to enhance LLMs' knowledge capabilities in medical tasks.
Abstract: Large language models (LLMs) have demonstrated impressive capabilities in natural language processing, particularly in conversational systems. However, applying them to acquire medical knowledge, diagnose with a physician's expertise, and recommend medications remains challenging, because they lack external domain-specific knowledge and constraints guiding their output. In this paper, we propose a novel approach that performs multi-subgraph-of-thought reasoning over multiple knowledge graph subgraphs. The approach decomposes a complex task into several subtasks and collects evidence from multiple reasoning chains, improving the factual accuracy, precision, and timeliness of LLM outputs while mitigating their inherent shortcomings. By leveraging external knowledge and self-learning, the method enhances an LLM's knowledge capabilities without updating model weights. We apply the technique to medical dialogue diagnosis and drug recommendation, tasks critical to human life, and demonstrate that it yields interpretable and explainable results; we also visualize the reasoning process to support interpretability. Experimental results show significant improvement over traditional knowledge-assisted LLM reasoning and other methods.
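
The abstract describes the pipeline only at a high level. As a rough illustration, the sketch below shows one way the decompose-then-gather-evidence idea could look on a toy knowledge graph: a diagnosis-and-prescription task split into two chained subtasks, each grounded in its own subgraph. The graph, the function names, and the voting rule are all hypothetical assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch (not the authors' code): decompose a medical query into
# subtasks, pull evidence triples from knowledge-graph subgraphs, and aggregate
# the evidence across reasoning chains before answering.
from collections import Counter

# Toy knowledge graph as (head, relation, tail) triples.
KG = [
    ("fever", "symptom_of", "influenza"),
    ("cough", "symptom_of", "influenza"),
    ("cough", "symptom_of", "common_cold"),
    ("influenza", "treated_by", "oseltamivir"),
    ("common_cold", "treated_by", "rest"),
]

def subgraph_for(entities):
    """Extract the subgraph of triples that touch any entity of interest."""
    return [(h, r, t) for (h, r, t) in KG if h in entities or t in entities]

def diagnose(symptoms):
    """Subtask 1: vote for the diagnosis best supported by symptom evidence."""
    votes = Counter(t for (_, r, t) in subgraph_for(symptoms) if r == "symptom_of")
    return votes.most_common(1)[0][0]

def recommend(diagnosis):
    """Subtask 2: follow treated_by edges in the diagnosis's subgraph."""
    return [t for (_, r, t) in subgraph_for({diagnosis}) if r == "treated_by"]

# The complex task decomposed into two chained subtasks; the retrieved triples
# double as citable evidence for the final, grounded generation step.
dx = diagnose({"fever", "cough"})
print(dx, recommend(dx))  # influenza ['oseltamivir']
```

In the full system, the collected triples would presumably be serialized into the LLM prompt so that the generated diagnosis and drug recommendation stay grounded in, and attributable to, the retrieved evidence.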
Submission Number: 12