q2d: Turning Questions into Dialogs to Teach Models How to Search

Published: 07 Oct 2023, Last Modified: 01 Dec 2023
Venue: EMNLP 2023 Main
Submission Type: Regular Long Paper
Submission Track: Dialogue and Interactive Systems
Keywords: Large language models, dialog generation, query generation, external search API, synthetic training data, QReCC dataset, information-seeking dialogs, q2d, data generation pipeline, synthetic dialogs, human-generated dialogs, grounded responses, anaphora, outdated information, hallucinations, factually consistent responses, multi-hop QA, PaLM
TL;DR: q2d is an automatic data generation pipeline that uses large language models to transform questions into information-seeking dialogs; the resulting data effectively trains dialog and query generation models, and the generated dialogs are natural, factual, and accurate.
Abstract: One of the exciting capabilities of recent language models for dialog is their ability to independently search for relevant information to ground a given dialog response. However, obtaining training data to teach models how to issue search queries is time- and resource-consuming. In this work, we propose $q2d$: an automatic data generation pipeline that generates information-seeking dialogs from questions. We prompt a large language model (PaLM) to create conversational versions of question answering datasets, and use it to improve query generation models that communicate with external search APIs to ground dialog responses. Unlike previous approaches, which relied on human-written dialogs with search queries, our method allows us to automatically generate query-based grounded dialogs with better control and scale. Our experiments demonstrate that: (1) for query generation on the QReCC dataset, models trained on our synthetically generated data achieve 90%-97% of the performance of models trained on human-generated data; (2) we can successfully generate data for training dialog models in new domains without any existing dialog data, as demonstrated on the multi-hop MuSiQue and Bamboogle QA datasets; and (3) a thorough analysis of the generated dialogs shows that humans find them of high quality and struggle to distinguish them from human-written dialogs.
Submission Number: 388
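The core of the pipeline described in the abstract, turning a QA question into an information-seeking dialog via few-shot LLM prompting, can be sketched as below. This is a minimal illustration, not the paper's actual implementation: the prompt wording, the `complete_fn` callable (standing in for a call to PaLM or any other text-completion model), and the example format are all assumptions for the sake of the sketch.

```python
def build_q2d_prompt(question: str, examples: list[tuple[str, str]]) -> str:
    """Assemble a few-shot prompt asking an LLM to rewrite a question as an
    information-seeking dialog whose final turn requires `question` as a
    search query. Illustrative only; the paper's real prompts differ."""
    parts = []
    for ex_question, ex_dialog in examples:
        parts.append(f"Question: {ex_question}\nDialog:\n{ex_dialog}\n")
    # The target question is appended last, leaving "Dialog:" for the
    # model to complete.
    parts.append(f"Question: {question}\nDialog:\n")
    return "\n".join(parts)


def q2d(question: str, complete_fn, examples: list[tuple[str, str]]) -> str:
    """Generate a synthetic dialog for one question.

    `complete_fn` is a hypothetical text-completion function (prompt -> str);
    in the paper this role is played by PaLM. Downstream, the (dialog, question)
    pair becomes a training example for query generation: the dialog is the
    input and the original question serves as the target search query.
    """
    return complete_fn(build_q2d_prompt(question, examples))
```

A dataset is then built by mapping `q2d` over the questions of an existing QA corpus (e.g. MuSiQue), which is what lets the method scale without any human-written dialogs.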