Bridging the Gap: Integrating Knowledge Graphs into Large Language Models for Complex Question Answering

ACL ARR 2024 April Submission 764 Authors

16 Apr 2024 (modified: 23 May 2024) · CC BY 4.0
Abstract: Large language models (LLMs) have performed impressively on a wide range of natural language processing tasks, but their tendency to hallucinate seriously undermines their credibility in complex reasoning. Combining explainable knowledge graphs (KGs) with LLMs is a promising way to address this challenge. However, a large representation gap separates structured KGs from LLMs pre-trained on unstructured text, and making LLMs understand and exploit KGs for complex reasoning remains an open problem. To tackle this challenge, we propose a comprehensive method that (i) improves KG retrieval by integrating reasoning processes and subgraph information, and (ii) enhances LLMs' understanding and use of KGs through an efficient yet effective KG representation and KG-related tuning. Extensive experiments on two KGQA datasets and various LLMs demonstrate that our method outperforms strong existing KGQA methods\footnote{All the code, data and model checkpoints will be publicly available at \url{https://anonymous.com}}.
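
For context on the representation gap the abstract describes, a minimal sketch of a common baseline is shown below: linearizing a retrieved KG subgraph into text and prepending it to the question before prompting an LLM. This is illustrative only and is not the paper's actual KG representation; the function names (verbalize_subgraph, build_prompt) and the prompt template are assumptions made here for the example.

    # Illustrative sketch: verbalize retrieved KG triples as text so a
    # text-pretrained LLM can consume them. Not the paper's method.
    from typing import List, Tuple

    Triple = Tuple[str, str, str]  # (head entity, relation, tail entity)

    def verbalize_subgraph(triples: List[Triple]) -> str:
        """Linearize KG triples into newline-separated textual facts."""
        return "\n".join(f"{h} {r.replace('_', ' ')} {t}." for h, r, t in triples)

    def build_prompt(question: str, triples: List[Triple]) -> str:
        """Compose a KG-augmented prompt: verbalized facts first, then the question."""
        facts = verbalize_subgraph(triples)
        return (
            "Answer the question using the knowledge graph facts below.\n"
            f"Facts:\n{facts}\n"
            f"Question: {question}\nAnswer:"
        )

    if __name__ == "__main__":
        subgraph = [
            ("Barack Obama", "born_in", "Honolulu"),
            ("Honolulu", "located_in", "Hawaii"),
        ]
        print(build_prompt("In which US state was Barack Obama born?", subgraph))

Such flat triple verbalization is exactly the kind of naive representation the abstract's proposed KG representation and KG-related tuning aim to improve upon.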
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: knowledge graphs
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 764