Recurrent neural network language generation for spoken dialogue systems

Comput. Speech Lang., 2020
Highlights
• RNNLG framework for language generation.
• Heuristically gated LSTM generator.
• Semantically controlled LSTM generator.
• Attentive encoder-decoder generator.
• Data counterfeiting.
• Discriminative training.
• Corpus-based evaluation in a single-domain setting.
• Human evaluation in a single-domain setting.
• Corpus-based evaluation for domain adaptation.
• Human evaluation for domain adaptation.

Abstract
Natural Language Generation (NLG) is a critical component of spoken dialogue systems, with a significant impact on both usability and perceived quality. Most NLG approaches in common use employ rules and heuristics, and they tend to generate rigid, stylised responses that lack the natural variation of human language. These limitations also add significantly to development costs and make the delivery of cross-domain, cross-lingual dialogue systems especially complex and expensive. The first contribution of this paper is RNNLG, a Recurrent Neural Network (RNN)-based statistical natural language generator that can learn to generate utterances directly from dialogue act–utterance pairs, without any predefined syntax or semantic alignments. The presentation includes a systematic comparison of the principal RNN-based NLG models available. The second contribution is to test the scalability of the proposed system by adapting models from one domain to another. We show that by pairing RNN-based NLG models with a proposed data counterfeiting method and a discriminative objective function, a pre-trained model can be quickly adapted to different domains with only a few examples. All of the findings presented are supported by both corpus-based and human evaluations.
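The semantically controlled LSTM (SC-LSTM) listed in the highlights augments a standard LSTM cell with a "reading gate" that gradually consumes a dialogue-act feature vector as words are generated, so the remaining semantics steer the memory update at each step. The NumPy sketch below illustrates that idea; the weight names, dimensions, and the alpha scaling factor are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sc_lstm_step(x, h_prev, c_prev, d_prev, W, alpha=0.5):
    """One step of an SC-LSTM-style cell (illustrative sketch).

    x      : word embedding at this time step, shape (n_in,)
    h_prev : previous hidden state, shape (n_hid,)
    c_prev : previous memory cell, shape (n_hid,)
    d_prev : remaining dialogue-act feature vector (slot/value flags), shape (n_da,)
    W      : dict of weight matrices (hypothetical names)
    """
    z = np.concatenate([x, h_prev])
    i = sigmoid(W["Wi"] @ z)          # input gate
    f = sigmoid(W["Wf"] @ z)          # forget gate
    o = sigmoid(W["Wo"] @ z)          # output gate
    g = np.tanh(W["Wg"] @ z)          # candidate cell update
    # Reading gate: how much of the dialogue act is "consumed" at this step.
    r = sigmoid(W["Wrx"] @ x + alpha * (W["Wrh"] @ h_prev))
    d = r * d_prev                    # dialogue-act features not yet realised
    # Memory update gets an extra term injecting the remaining semantics.
    c = f * c_prev + i * g + np.tanh(W["Wd"] @ d)
    h = o * np.tanh(c)
    return h, c, d

# Usage with random weights (dimensions are arbitrary for the sketch).
rng = np.random.default_rng(0)
n_in, n_hid, n_da = 8, 16, 5
W = {name: rng.normal(size=shape) for name, shape in {
    "Wi": (n_hid, n_in + n_hid), "Wf": (n_hid, n_in + n_hid),
    "Wo": (n_hid, n_in + n_hid), "Wg": (n_hid, n_in + n_hid),
    "Wrx": (n_da, n_in), "Wrh": (n_da, n_hid), "Wd": (n_hid, n_da),
}.items()}
h, c = np.zeros(n_hid), np.zeros(n_hid)
d = np.ones(n_da)  # full dialogue-act encoding at the start of generation
for _ in range(10):
    h, c, d = sc_lstm_step(rng.normal(size=n_in), h, c, d, W)
```

In the published SC-LSTM, training additionally penalises whatever remains of the dialogue-act vector at the end of the utterance, encouraging the generator to render every slot and to render it only once.

The data counterfeiting method used for domain adaptation synthesises pseudo in-domain training data by taking delexicalised out-of-domain utterances and swapping their slot tokens for target-domain slots of the same functional class. A minimal sketch, assuming a hypothetical slot-class map and token naming scheme:

```python
import random

# Hypothetical map grouping slot tokens by functional class across domains.
SLOT_CLASS = {
    "SLOT_FOOD": "informable", "SLOT_AREA": "informable",       # restaurants
    "SLOT_BATTERY": "informable", "SLOT_WEIGHT": "informable",  # laptops
}

def counterfeit(delexicalised_utt, target_slots):
    """Swap source-domain slot tokens for same-class target-domain tokens,
    yielding a counterfeit target-domain training example."""
    out = []
    for tok in delexicalised_utt.split():
        if tok in SLOT_CLASS:
            candidates = [s for s in target_slots
                          if SLOT_CLASS.get(s) == SLOT_CLASS[tok]]
            tok = random.choice(candidates) if candidates else tok
        out.append(tok)
    return " ".join(out)

print(counterfeit("the SLOT_FOOD restaurant is in the SLOT_AREA",
                  ["SLOT_BATTERY", "SLOT_WEIGHT"]))
```

The counterfeited examples are then used to pre-train the generator before fine-tuning on the few genuine target-domain examples available.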