Large Language Model and Knowledge Graph Entangled Logical Reasoning

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: Large language models (LLMs) and knowledge graphs (KGs) have complementary strengths and weaknesses for logical reasoning. LLMs exhibit strong semantic reasoning capabilities, but they lack world knowledge and structured reasoning abilities. KGs contain extensive factual knowledge but have limited language understanding and reasoning flexibility. In this paper, we propose LKLR, a framework that entangles LLMs and KGs for synergistic reasoning. A key technique is transforming the LLM's implicit reasoning chain into a grounded logical query over the KG, enabling seamless integration. Traversing this query grounds each inference step in KG facts while maintaining the reasoning flow, combining the robust knowledge of KGs with the semantic reasoning of LLMs. Our approach integrates neural and symbolic reasoning to achieve hybrid reasoning capabilities. Experimental results on several QA benchmarks show that our proposed framework achieves state-of-the-art performance and provides transparent and reliable reasoning.
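The abstract only describes the approach at a high level; as a minimal, hypothetical illustration of the general idea (a relation chain standing in for the LLM's implicit reasoning steps, traversed over a KG so that every hop is backed by a stored fact), the following Python toy is not the paper's implementation: the KG triples, `RELATION_CHAIN`, and `ground_query` are all invented for this sketch.

```python
from typing import Optional

# Toy knowledge graph: subject -> {relation: object}. Illustrative facts only.
KG = {
    "Inception": {"directed_by": "Christopher Nolan"},
    "Christopher Nolan": {"born_in": "London"},
    "London": {"located_in": "United Kingdom"},
}

# A relation chain standing in for an LLM's implicit reasoning steps, e.g. for
# "Which country was the director of Inception born in?"
RELATION_CHAIN = ["directed_by", "born_in", "located_in"]

def ground_query(start_entity: str, relation_chain: list[str]) -> Optional[str]:
    """Traverse the KG along the relation chain, grounding every hop in a stored fact."""
    entity = start_entity
    for relation in relation_chain:
        neighbors = KG.get(entity, {})
        if relation not in neighbors:
            return None  # this reasoning step has no supporting KG fact
        next_entity = neighbors[relation]
        print(f"({entity}) --{relation}--> ({next_entity})")  # transparent reasoning trace
        entity = next_entity
    return entity

if __name__ == "__main__":
    answer = ground_query("Inception", RELATION_CHAIN)
    print("Answer:", answer)  # -> United Kingdom
```

In a real system the reasoning chain would be produced by the LLM and the traversal run against a full KG store, but the sketch shows why grounding each hop yields a transparent, fact-backed reasoning trace.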
Paper Type: long
Research Area: Question Answering
Contribution Types: NLP engineering experiment
Languages Studied: English