More Informative Dialogue Generation via Multiple Knowledge Selection

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission · Anonymous
Abstract: Knowledge-grounded dialogue generation is the task of generating a fluent and informative response based on both the dialogue context and a collection of external knowledge. The knowledge pool contains considerable noise, so appropriate knowledge selection plays an important role. Existing methods select only one piece of knowledge to participate in response generation, which inevitably loses useful clues contained in the discarded candidates. In this work, we propose MSEL, a novel knowledge selector that can select multiple pieces of useful knowledge. MSEL takes the dialogue context and knowledge pool as inputs and predicts a subset of the knowledge pool in a sequence-to-sequence manner. MSEL is easy to implement and can benefit from generative pre-trained language models. Empirical results on the Wizard-of-Wikipedia dataset indicate that our model significantly outperforms state-of-the-art approaches in both automatic and human evaluation.
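The abstract describes selecting a *subset* of the knowledge pool autoregressively, conditioning each pick on the context and the knowledge chosen so far, and stopping when an end token is generated. The sketch below is a hypothetical toy illustration of that decoding loop only: the word-overlap scorer, stopword list, threshold, and `select_knowledge` function are all stand-ins invented here, not the paper's MSEL model, which uses a generative pre-trained language model instead.

```python
# Toy sketch of sequence-to-sequence multiple knowledge selection.
# Assumption: the overlap scorer and threshold are illustrative stand-ins
# for a learned decoder; only the decoding loop mirrors the described method.
STOPWORDS = {"is", "a", "an", "the", "of", "that", "in", "was", "to", "and"}

def content_tokens(text):
    """Lowercase, strip basic punctuation, and drop stopwords."""
    words = text.lower().replace(".", "").replace(",", "").split()
    return {w for w in words if w not in STOPWORDS}

def select_knowledge(context, pool, threshold=0.25, max_steps=3):
    """Autoregressively pick a subset of knowledge indices.

    At each step, remaining candidates are scored against the dialogue
    context *plus* the knowledge already selected (mimicking conditioning
    on previously generated tokens); decoding stops when no candidate
    clears the threshold, a stand-in for an end-of-sequence token.
    """
    query = content_tokens(context)
    selected = []
    for _ in range(max_steps):
        remaining = [i for i in range(len(pool)) if i not in selected]
        if not remaining:
            break
        scored = []
        for i in remaining:
            cand = content_tokens(pool[i])
            scored.append((len(query & cand) / (len(cand) or 1), i))
        best_score, best_i = max(scored)
        if best_score < threshold:
            break  # analogous to emitting the end token
        selected.append(best_i)
        query |= content_tokens(pool[best_i])  # condition on prior picks
    return selected

pool = [
    "Jazz is a genre of music that originated in New Orleans",
    "The Eiffel Tower was completed in 1889",
    "New Orleans hosts many famous jazz clubs",
    "Python is a programming language",
]
print(select_knowledge("tell me about jazz music", pool))  # → [0, 2]
```

Note how the second selected sentence overlaps the *first selection* ("New Orleans") more than the raw context does; that conditioning is what lets a sequential selector pick complementary knowledge rather than redundant top-k candidates.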
