Knowledge Graph Integration and Self-Verification for Comprehensive Retrieval-Augmented Generation

Published: 11 Sept 2024, Last Modified: 11 Sept 2024
Venue: 2024 KDD Cup CRAG Workshop
License: CC BY-NC 4.0
Keywords: Retrieval Augmented Generation, Large Language Model
Abstract: Retrieval-Augmented Generation (RAG) has attracted significant attention from both academia and industry as a promising way to address the knowledge limitations of large language models (LLMs). However, LLMs still hallucinate even when augmented with retrieval. To mitigate hallucination across a wide range of question types, we combine several strategies. First, we exploit LLaMA3's emergent self-verification capability to judge whether the retrieved references can adequately answer a given question, declining to answer when they cannot. Second, we augment our knowledge base with knowledge graphs, which improves contextual understanding and further reduces hallucination in RAG; the LLM's reasoning capabilities also allow us to integrate and interpret knowledge-graph content, yielding more coherent and accurate responses. Finally, handling these diverse question types with tailored strategies lets us deliver precise, informative answers suited to the specific requirements of each query. Overall, our work leverages the advanced capabilities of LLMs to improve the robustness and credibility of the retrieval system; this multi-faceted approach, combined with careful evaluation of the references, produces high-quality responses regardless of question complexity.
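The self-verification step described above can be pictured as a small gating routine: before generating an answer, the LLM is asked whether the retrieved references are sufficient, and the system declines rather than hallucinating when they are not. The sketch below is only an illustration of that idea under assumptions; `llm_generate`, the prompt templates, and the "I don't know" fallback are hypothetical stand-ins, not the authors' actual interface to LLaMA3 or their knowledge-graph retriever.

```python
from typing import Callable, List

# Hypothetical prompt templates for (1) verifying reference sufficiency and
# (2) producing a grounded answer. The exact wording used by the authors is
# not specified in the abstract.
VERIFY_PROMPT = (
    "Question: {question}\n"
    "References:\n{references}\n\n"
    "Can the references above fully answer the question? Reply 'yes' or 'no'."
)

ANSWER_PROMPT = (
    "Answer the question using only the references.\n"
    "Question: {question}\n"
    "References:\n{references}\n"
    "Answer:"
)


def answer_with_self_verification(
    question: str,
    references: List[str],
    llm_generate: Callable[[str], str],  # assumed wrapper around a LLaMA3 call
) -> str:
    """Return a reference-grounded answer, or decline when references are insufficient."""
    refs = "\n".join(f"- {r}" for r in references)

    # Self-verification: ask the model whether the references suffice.
    verdict = llm_generate(VERIFY_PROMPT.format(question=question, references=refs))
    if not verdict.strip().lower().startswith("yes"):
        # Declining is preferred over producing an unsupported (hallucinated) answer.
        return "I don't know"

    # Only generate an answer once the references pass verification.
    return llm_generate(ANSWER_PROMPT.format(question=question, references=refs))
```

In this reading, knowledge-graph results would simply be serialized into the `references` list alongside retrieved passages before the verification step, so the same gate covers both sources.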
Submission Number: 13