Trustful LLMs: Customizing and Grounding Text Generation with Knowledge Bases and Dual Decoders

Published: 24 Oct 2024, Last Modified: 16 Apr 2025 · EMNLP · CC BY 4.0
Abstract: Although people are impressed by the content generation skills of large language models, the use of LLMs such as ChatGPT is limited by the domain grounding of the content. The correctness and groundedness of the generated content need to be based on a verified context, such as results from Retrieval-Augmented Generation (RAG). One important issue when adapting LLMs to a customized domain is that the generated responses are often incomplete, or the additions are not verified and may even be hallucinated. Prior studies on hallucination detection have focused on evaluation metrics, which are not easily adaptable to dynamic domains and can be vulnerable to attacks like jailbreaking. In this work, we propose 1) a post-processing algorithm that leverages knowledge triplets in the RAG context to correct hallucinations and 2) a dual-decoder model that fuses the RAG context to guide the generation process.
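As a rough illustration of the first component, the sketch below checks knowledge triplets extracted from a draft response against triplets drawn from the RAG context, keeping supported claims, correcting conflicting ones, and dropping unverifiable ones. This is a minimal sketch, assuming triplets have already been extracted (e.g., by an OpenIE-style extractor); the `verify_and_correct` name and the exact correction policy are illustrative assumptions, not the paper's algorithm.

```python
from typing import List, Set, Tuple

# A knowledge triplet: (subject, relation, object).
Triplet = Tuple[str, str, str]

def verify_and_correct(response_triplets: List[Triplet],
                       context_triplets: Set[Triplet]) -> List[Triplet]:
    """Keep response triplets supported by the RAG context; when the context
    holds a triplet with the same subject and relation but a different object,
    substitute the grounded object; otherwise drop the unverifiable claim.
    (Illustrative policy, not the paper's algorithm.)"""
    by_key = {(s, r): (s, r, o) for (s, r, o) in context_triplets}
    corrected: List[Triplet] = []
    for s, r, o in response_triplets:
        if (s, r, o) in context_triplets:
            corrected.append((s, r, o))       # supported: keep as-is
        elif (s, r) in by_key:
            corrected.append(by_key[(s, r)])  # conflicting object: correct it
        # else: no grounding in context, drop the claim
    return corrected

if __name__ == "__main__":
    context = {("Contoso", "headquartered_in", "Oslo")}   # hypothetical example
    response = [("Contoso", "headquartered_in", "Paris")]
    print(verify_and_correct(response, context))
    # -> [('Contoso', 'headquartered_in', 'Oslo')]
```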
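For the second component, one plausible reading is a generator with two decoder stacks, one attending to the encoded prompt and one attending to the encoded RAG context, whose hidden states are fused by a learned gate before the output projection. The PyTorch sketch below is our own illustration under that assumption; the gated fusion, layer sizes, and the `DualDecoderFusion` class are not the paper's specified architecture.

```python
import torch
import torch.nn as nn

class DualDecoderFusion(nn.Module):
    """Sketch: two decoders, one free-running and one RAG-grounded,
    fused token-by-token with a learned gate (illustrative, not the paper's model)."""
    def __init__(self, vocab_size=32000, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.gen_decoder = nn.TransformerDecoder(layer, n_layers)  # attends to prompt
        self.rag_decoder = nn.TransformerDecoder(layer, n_layers)  # attends to RAG context
        self.gate = nn.Linear(2 * d_model, 1)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, prompt_memory, rag_memory):
        x = self.embed(tokens)
        # Standard causal mask so each position only sees earlier tokens.
        t = tokens.size(1)
        causal = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        h_gen = self.gen_decoder(x, prompt_memory, tgt_mask=causal)
        h_rag = self.rag_decoder(x, rag_memory, tgt_mask=causal)
        # Gate toward the grounded decoder where the context supports the claim.
        g = torch.sigmoid(self.gate(torch.cat([h_gen, h_rag], dim=-1)))
        fused = g * h_rag + (1 - g) * h_gen
        return self.lm_head(fused)

model = DualDecoderFusion()
tokens = torch.randint(0, 32000, (2, 8))   # (batch, target length)
prompt_mem = torch.randn(2, 16, 256)       # encoded prompt states
rag_mem = torch.randn(2, 32, 256)          # encoded RAG passages
logits = model(tokens, prompt_mem, rag_mem)  # (2, 8, 32000)
```

A per-token gate of this kind would let the model lean on the grounded decoder when the retrieved context covers a claim and fall back to the general decoder otherwise.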