Keywords: large language models, knowledge graphs
Abstract: Large Language Models (LLMs) have demonstrated remarkable performance in reasoning tasks but face challenges such as hallucinations and outdated knowledge, particularly in complex scenarios that require precise and reliable reasoning. Knowledge Graphs (KGs), with their structured, factual nature, provide a promising solution by serving as an external knowledge source to enhance LLM performance. However, the vast scale of KGs complicates the retrieval of relevant information.
Existing approaches mainly leverage LLMs to *generate* retrieval plans
in the form of logical forms or relation paths for querying KGs. However, the generated plans may mismatch the valid relations in KGs, e.g., predicting $language\_spoken$ instead of the valid relation $languages\_spoken$. Despite being a seemingly minor inconsistency, such a mismatch can render the generated retrieval plan unexecutable. To address this limitation, we propose a novel framework, Combining on Graphs (CoG), where LLMs act as combiners rather than generators. Specifically, rather than directly generating a retrieval plan, CoG encourages LLMs to *combine* relations that already exist within the KG into a relation path serving as the retrieval plan. This approach constrains the output space of LLMs to be structured rather than arbitrary, ensuring that the generated retrieval plan aligns with the structure of the KG and making it more reliable and adaptable to diverse KGs. Extensive experiments on a range of datasets and reasoning tasks demonstrate the effectiveness of CoG.
Primary Area: foundation or frontier models, including LLMs
Submission Number: 16324