Fast Domain Adaptation for Goal-Oriented Dialogue Using a Hybrid Generative-Retrieval Transformer

Published: 01 Jan 2020, Last Modified: 10 Nov 2023. ICASSP 2020.
Abstract: Goal-oriented dialogue systems are now widely adopted in industry, where the practical aspects of using them become of key importance. As such, these systems are expected to fit into a rapid prototyping cycle for new products and domains. For data-driven dialogue systems (especially those based on deep learning), that amounts to maintaining production-level performance when provided with only a few `seed' dialogue examples, a property normally referred to as data efficiency. With extremely data-dependent deep learning methods, the most promising way to achieve practical data efficiency is transfer learning, i.e., leveraging a larger, highly represented data source for training a base model, then fine-tuning it to the available in-domain data. In this paper, we present a hybrid generative-retrieval model that can be trained using transfer learning. By using GPT-2 as the base model and fine-tuning it on the multi-domain MetaLWOz dataset, we obtain a robust dialogue model able to perform both response generation and ranking. Combining both, it outperforms several competitive generative-only and retrieval-only baselines, measured by language modeling quality on MetaLWOz as well as by goal-oriented metrics (Intent/Slot F1-scores) on the MultiWOZ corpus.
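The hybrid idea described in the abstract — generating a candidate response and ranking it alongside retrieved candidates under a single language model — can be sketched minimally as follows. This is not the paper's implementation: `lm_score` here is a hypothetical stand-in (simple word overlap with the dialogue context) for the log-likelihood a fine-tuned GPT-2 would assign to a response given the context, and `hybrid_select` is an illustrative name.

```python
# Minimal sketch of hybrid generative-retrieval response selection.
# Assumption: lm_score stands in for a fine-tuned GPT-2's scoring of a
# response given the dialogue context; here it is a toy overlap measure.

def lm_score(context: str, response: str) -> float:
    """Toy stand-in scorer: fraction of response words seen in the context."""
    ctx = set(context.lower().split())
    words = response.lower().split()
    if not words:
        return 0.0
    return sum(w in ctx for w in words) / len(words)

def hybrid_select(context: str, generated: str, retrieved: list[str]) -> str:
    """Rank the generated response together with retrieved candidates
    and return the highest-scoring one (ties favor the generated one)."""
    candidates = [generated] + retrieved
    return max(candidates, key=lambda r: lm_score(context, r))

context = "what time does the museum open"
best = hybrid_select(
    context,
    "it opens at nine",
    ["the museum open time is nine", "hello there"],
)
```

In the actual model, the same transformer that generates the candidate also scores the retrieved ones, so a single fine-tuned network serves both roles.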