Abstract: This paper presents a novel approach to grounding GPT-based models with knowledge graphs (KGs) to develop domain-constrained dialogue agents with consistent personalities. We introduce the KG-grounded GPT model and compare its capacity to resonate with a general audience against two established models for this task: a persona-grounded GPT model and a relevance-based classifier. Furthermore, we compare all models against a RAG model in terms of hallucination error rates. Through human evaluation studies, we demonstrate that the KG-grounded GPT model outperforms existing approaches, yielding higher-quality responses with significantly fewer hallucination errors. Moreover, we highlight the scalability of our method: it requires no fine-tuning and is straightforward to implement.
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: Dialogue Systems, Generative AI, Grounding LLMs
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 3544