Dual Diffusion Learning for Knowledge-Grounded Dialogue Generation

ACL ARR 2024 June Submission 2747 Authors

15 Jun 2024 (modified: 06 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Knowledge-grounded dialogue generation plays a crucial role in intelligent conversational agents. However, previous work suffers from inadequate control information in both knowledge selection and dialogue generation. First, prior-based knowledge selection lacks access to the posterior distribution, while posterior-based methods suffer from biases between the training and inference stages. Second, conventional autoregressive generation lacks precise control over the injection of knowledge, leading to unintended shifts in the focus of the response. To address these limitations, we propose a Controllable Dual Diffusion Learning model, an enhanced framework for knowledge-grounded dialogue generation built on controllable modules. Our approach formulates response generation and knowledge generation as dual tasks to fully leverage prior and posterior knowledge and to avoid training and inference biases. We optimize knowledge selection with knowledge labels generated by the dual module and iteratively refine the generated dialogue with globally related knowledge information. Experimental results on two public datasets demonstrate that our approach achieves significant improvements in both automatic and human evaluations.
Paper Type: Long
Research Area: Dialogue and Interactive Systems
Research Area Keywords: Diffusion Learning, Dual Learning, Knowledge-Grounded Dialogue Generation
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English
Submission Number: 2747