GIVE: Structured Reasoning with Knowledge Graph Inspired Veracity Extrapolation

27 Sept 2024 (modified: 16 Mar 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: Large Language Model, Structured Reasoning, Biomedical QA, Intelligent Agent
Abstract:

Existing retrieval-based frameworks for enhancing large language models (LLMs) require access to a rich non-parametric knowledge source containing factual information that directly answers the query. Reasoning-based approaches, in turn, rely heavily on the model's parametric knowledge to produce explicit, domain-specific reasoning chains. However, comprehensive knowledge sources are expensive or infeasible to build for scientific or niche domains, and are therefore unavailable at both training and inference time. To tackle these challenges, we introduce Graph Inspired Veracity Extrapolation (GIVE), a novel reasoning framework that integrates parametric and non-parametric memories to enhance both knowledge retrieval and faithful reasoning using very limited external clues. By leveraging structured knowledge to inspire the LLM to model the interconnections among relevant concepts, our method facilitates a more logical, step-wise reasoning approach akin to expert problem-solving, rather than gold-answer retrieval. Specifically, the framework prompts the LLM to decompose the query into crucial concepts and attributes, construct entity groups of relevant entities, and build an augmented reasoning chain by probing potential relationships among node pairs across these entity groups. Our method incorporates both factual and extrapolated linkages to enable comprehensive understanding and response generation. Extensive experiments on domain-specific and open-domain benchmarks demonstrate the effectiveness of the proposed method, underscoring the efficacy of integrating structured information with the internal reasoning ability of LLMs to tackle difficult tasks with limited external resources.
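As a rough illustration of the pipeline described above, the following is a minimal sketch, assuming a tiny knowledge graph and simple placeholder functions; the names (`decompose_query`, `build_entity_groups`, `probe_links`) and the keyword-matching stand-in for the LLM steps are hypothetical, not the authors' actual implementation.

```python
# Hypothetical sketch of the three GIVE stages from the abstract:
# (1) decompose the query into crucial concepts, (2) form entity groups
# from a sparse KG, (3) probe node pairs across groups, keeping factual
# KG edges and marking missing ones as "extrapolated" for the LLM.

def decompose_query(query, known_concepts):
    """Stage 1: extract crucial concepts mentioned in the query.
    (In GIVE an LLM performs this; naive keyword matching stands in here.)"""
    return [c for c in known_concepts if c in query.lower()]

def build_entity_groups(concepts, related):
    """Stage 2: group each query concept with related entities
    drawn from the limited external knowledge source."""
    return {c: {c} | related.get(c, set()) for c in concepts}

def probe_links(groups, kg_edges):
    """Stage 3: for each node pair across entity groups, keep factual
    KG edges and flag absent ones as extrapolated candidate linkages."""
    chain = []
    concepts = list(groups)
    for i, a in enumerate(concepts):
        for b in concepts[i + 1:]:
            for u in groups[a]:
                for v in groups[b]:
                    kind = "factual" if (u, v) in kg_edges else "extrapolated"
                    chain.append((u, kind, v))
    return chain

# Toy biomedical example (entities and edge are illustrative only).
related = {"aspirin": {"nsaid"}, "inflammation": {"cox-2"}}
kg_edges = {("nsaid", "cox-2")}
concepts = decompose_query("Does aspirin reduce inflammation?",
                           ["aspirin", "inflammation"])
groups = build_entity_groups(concepts, related)
chain = probe_links(groups, kg_edges)
```

The resulting `chain` mixes one factual linkage (`nsaid` → `cox-2`) with extrapolated ones that an LLM would be prompted to verify, mirroring the abstract's combination of factual and extrapolated linkages.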

Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 12555