Collaborative Framework for Dynamic Knowledge Updating and Transparent Reasoning with Large Language Models

Published: 01 Jan 2024, Last Modified: 25 Jul 2025 · AIS&P 2024 · CC BY-SA 4.0
Abstract: Large language models (LLMs) have achieved remarkable results in natural language processing. However, when confronted with knowledge absent from their training data, they often exhibit inaccurate reasoning, hallucinations, and insufficient transparency in their decision-making processes. To address these challenges, we propose a framework that integrates LLMs with knowledge graphs (KGs) to enable collaborative reasoning. By employing automated prompt engineering and dynamically updating the knowledge graph, the framework improves reasoning accuracy. Additionally, by explicitly constructing retrieval reasoning paths, it significantly improves the transparency and explainability of the reasoning process. Experimental results demonstrate that the framework achieves promising performance on knowledge graph reasoning tasks and effectively increases the reliability of the reasoning process.
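The abstract's pipeline (retrieve an explicit path from a dynamically updated KG, then serialize it into a prompt for the LLM) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the `KnowledgeGraph` class, the BFS path retrieval, and the `build_prompt` helper are hypothetical stand-ins for the framework's actual components, and the LLM call itself is omitted.

```python
# Hypothetical sketch of the KG-augmented reasoning loop described in the
# abstract: dynamic triple updates, explicit path retrieval, and prompt
# construction from the retrieved path. Not the authors' code.

class KnowledgeGraph:
    def __init__(self):
        self.triples = set()  # (head, relation, tail) triples

    def add(self, head, relation, tail):
        # Dynamic updating: new knowledge is inserted as triples at any time.
        self.triples.add((head, relation, tail))

    def path(self, start, goal, max_hops=3):
        # Breadth-first search for an explicit retrieval reasoning path;
        # returning the traversed edges makes the reasoning inspectable.
        frontier = [(start, [])]
        seen = {start}
        while frontier:
            node, trail = frontier.pop(0)
            if node == goal:
                return trail
            if len(trail) >= max_hops:
                continue
            for h, r, t in self.triples:
                if h == node and t not in seen:
                    seen.add(t)
                    frontier.append((t, trail + [(h, r, t)]))
        return None


def build_prompt(question, path):
    # Automated prompt construction: serialize the retrieved path as
    # evidence so the LLM's answer is grounded in explicit KG facts.
    facts = "; ".join(f"{h} --{r}--> {t}" for h, r, t in path)
    return f"Facts: {facts}\nQuestion: {question}\nAnswer using only the facts."


kg = KnowledgeGraph()
kg.add("Paris", "capital_of", "France")   # initial knowledge
kg.add("France", "member_of", "EU")       # later dynamic update
path = kg.path("Paris", "EU")
prompt = build_prompt("Is Paris in an EU member state?", path)
```

The returned `path` doubles as the transparent reasoning trace the abstract describes: each hop is a concrete KG edge that can be shown to the user alongside the model's answer.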