Existing retrieval-based frameworks for enhancing large language models (LLMs) require access to a rich non-parametric knowledge source that contains factual information directly answering the query. Reasoning-based approaches, in contrast, rely heavily on the model's parametric knowledge to produce explicit, domain-specific reasoning chains. However, comprehensive knowledge sources are expensive or infeasible to build for scientific or niche domains, and are therefore unavailable at either training or inference time. To tackle these challenges, we introduce Graph Inspired Veracity Extrapolation (GIVE), a novel reasoning framework that integrates parametric and non-parametric memories to enhance both knowledge retrieval and faithful reasoning using very limited external clues. By leveraging structured knowledge to prompt the LLM to model the interconnections among relevant concepts, our method supports a logical, step-wise reasoning process akin to expert problem-solving, rather than retrieval of a gold answer. Specifically, the framework prompts LLMs to decompose the query into crucial concepts and attributes, construct entity groups with relevant entities, and build an augmented reasoning chain by probing potential relationships among node pairs across these entity groups. Our method incorporates both factual and extrapolated linkages to enable comprehensive understanding and response generation. Extensive experiments on domain-specific and open-domain benchmarks demonstrate the effectiveness of the proposed method, underscoring the value of combining structured information with the internal reasoning ability of LLMs to tackle difficult tasks with limited external resources.
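To make the pipeline described above concrete, the following is a minimal sketch of a GIVE-style procedure, not the authors' implementation. It assumes a generic `llm(prompt) -> str` completion function and a small knowledge graph supplied as (head, relation, tail) triples; all function and variable names are illustrative.

```python
from itertools import product

def give_answer(query: str, kg_triples: set[tuple[str, str, str]], llm) -> str:
    # Step 1: decompose the query into its crucial concepts.
    concepts = [c.strip() for c in
                llm(f"List the key concepts in this question, comma-separated: {query}").split(",")
                if c.strip()]

    # Step 2: build entity groups by expanding each concept with related entities.
    entity_groups = []
    for concept in concepts:
        related = llm(f"List entities closely related to '{concept}', comma-separated:").split(",")
        entity_groups.append({concept, *[r.strip() for r in related if r.strip()]})

    # Step 3: probe relationships between node pairs across groups, keeping both
    # factual links (present in the KG) and extrapolated links (LLM-inferred).
    known = {(h, t): r for h, r, t in kg_triples}
    links = []
    for i, g1 in enumerate(entity_groups):
        for g2 in entity_groups[i + 1:]:
            for a, b in product(g1, g2):
                if (a, b) in known:
                    links.append(f"{a} --{known[(a, b)]}--> {b} (factual)")
                else:
                    guess = llm(f"State the most likely relation from '{a}' to '{b}' "
                                f"in one phrase, or 'none':").strip()
                    if guess.lower() != "none":
                        links.append(f"{a} --{guess}--> {b} (extrapolated)")

    # Step 4: answer the query conditioned on the augmented reasoning chain.
    context = "\n".join(links)
    return llm(f"Using these relationships:\n{context}\nAnswer the question: {query}")
```

In this sketch, the augmented reasoning chain is simply the concatenation of factual and extrapolated links passed back to the model as context; the actual framework may filter, rank, or verify these links before answer generation.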