Knowledge-Consistent Dialogue Generation with Knowledge Graphs

16 May 2022 (modified: 05 May 2023) · NeurIPS 2022 Submission
Keywords: knowledge-grounded dialogue generation, knowledge graph
Abstract: Pre-trained generative language models have achieved impressive performance on dialogue generation tasks. However, when generating responses for a conversation that requires complex factual knowledge, they are far from perfect, as they lack mechanisms to retrieve, encode, and reflect such knowledge in the generated responses. Unlike methods that work with unstructured text, which are inefficient at retrieving and encoding knowledge, some knowledge-grounded dialogue generation methods tackle this problem by leveraging structured knowledge from Knowledge Graphs (KGs). However, existing methods do not guarantee that the language model utilizes a piece of knowledge from the KG that is relevant to the given dialogue, nor that it generates responses consistent with that knowledge. To overcome these limitations, we propose SUbgraph Retrieval-augmented GEneration (SURGE), a framework for generating knowledge-consistent, context-relevant dialogues with a KG. Specifically, our method first retrieves the relevant subgraph from the given KG, and then enforces consistency across facts by perturbing their word embeddings conditioned on the retrieved subgraph. It then learns a latent representation space using graph-text multi-modal contrastive learning, which ensures that the generated texts have high similarity to the retrieved subgraphs. We validate our SURGE framework on the OpenDialKG dataset and show that it indeed generates high-quality dialogues that faithfully reflect the knowledge from the KG.
TL;DR: A novel knowledge-consistent dialogue generation framework with knowledge graphs, realized by context-relevant subgraph retrieval, invariant graph encoding, and graph-text contrastive learning.
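As a rough illustration of the graph-text contrastive learning component described above, the following is a minimal sketch, not the paper's implementation: the symmetric InfoNCE formulation, in-batch negatives, function names, and embedding dimensions are all assumptions made for the example.

```python
# Hypothetical sketch of a graph-text contrastive objective: pull each
# generated-response embedding toward the embedding of its retrieved
# subgraph, and push it away from the other subgraphs in the batch.
# (Illustrative only; not the authors' SURGE implementation.)
import torch
import torch.nn.functional as F

def graph_text_contrastive_loss(text_emb, graph_emb, temperature=0.07):
    """text_emb, graph_emb: [batch, dim] embeddings of generated responses
    and their retrieved subgraphs; row i of each tensor is a positive pair."""
    text_emb = F.normalize(text_emb, dim=-1)
    graph_emb = F.normalize(graph_emb, dim=-1)
    # Pairwise cosine similarities between all texts and all graphs.
    logits = text_emb @ graph_emb.t() / temperature  # [batch, batch]
    targets = torch.arange(text_emb.size(0), device=text_emb.device)
    # Symmetric loss over both directions: text-to-graph and graph-to-text.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage with random stand-in embeddings (batch of 8, dimension 256):
loss = graph_text_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```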