Building Model-Driven Knowledge Graphs via Large Language Models

Published: 01 Jan 2024, Last Modified: 01 Aug 2025 · ADBIS (Short Papers) 2024 · CC BY-SA 4.0
Abstract: We consider a special case of knowledge graph construction from text, where the target knowledge graph is structured as a specific directed acyclic graph (DAG) and the input text has the form of a recipe. This paper presents a case study that uses a large language model (LLM) for the knowledge extraction process. We formulate knowledge extraction as a model-driven structure recovery process and show that LLMs can be used effectively within it. Extensive experiments demonstrate that a zero-shot process using LLMs produces a wide range of errors. To remedy them, we propose two model-driven prompting strategies by which LLMs can improve the accuracy of knowledge graph construction. We further show that a state memoization technique introduces an accuracy-efficiency tradeoff that demands further research.
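The abstract does not include code; as a rough illustration of the target structure only, the following sketch (all node and edge names are invented for illustration) models a recipe as a DAG whose nodes are ingredients and actions, and checks acyclicity with Kahn's algorithm, the kind of structural constraint the recovered graph must satisfy:

```python
# Hypothetical sketch: a recipe's ingredient/action structure as a DAG,
# with a topological-sort check that an extracted graph is acyclic.
from collections import deque

def is_dag(nodes, edges):
    """Kahn's algorithm: True iff the directed graph contains no cycle."""
    indegree = {n: 0 for n in nodes}
    for _, dst in edges:
        indegree[dst] += 1
    queue = deque(n for n in nodes if indegree[n] == 0)
    visited = 0
    while queue:
        node = queue.popleft()
        visited += 1
        for src, dst in edges:
            if src == node:
                indegree[dst] -= 1
                if indegree[dst] == 0:
                    queue.append(dst)
    return visited == len(nodes)

# Invented example: edges point from an input to the action that consumes it.
nodes = ["flour", "water", "mix", "knead", "bake"]
edges = [("flour", "mix"), ("water", "mix"), ("mix", "knead"), ("knead", "bake")]
print(is_dag(nodes, edges))  # → True
```

An extraction error that introduces a cyclic dependency (e.g. an edge `("bake", "mix")`) would make `is_dag` return `False`, which is one way a model-driven pipeline could reject an invalid LLM output.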