Quality > Quantity: Synthetic Corpora from Foundation Models for Closed-Domain Extractive Question Answering

16 Jun 2023 (modified: 01 Dec 2023) · Submitted to EMNLP 2023
Submission Type: Regular Long Paper
Submission Track: Question Answering
Keywords: Closed Domain Question Answering, Prompt Engineering, Foundation Models, Domain Adaptation
TL;DR: Pre-training on corpora generated by text-generation foundation models is an effective form of in-domain pre-training for extractive question answering.
Abstract: Domain adaptation, the process of training a model in one domain and applying it to another, has been extensively explored in machine learning. While training a domain-specific foundation model (FM) from scratch is an option, recent methods have focused on adapting pre-trained FMs to domain-specific tasks. However, our experiments reveal that neither approach consistently achieves state-of-the-art (SOTA) results in the target domain. In this work, we study extractive question answering within closed domains and introduce the concept of targeted pre-training: determining and generating relevant data with which to further pre-train our models, as opposed to the conventional approach of relying on domain-specific FMs trained on broad data. Our framework uses Galactica to generate synthetic, "targeted" corpora that align with specific writing styles and topics, such as research papers and radiology reports; this process can be viewed as a form of knowledge distillation. We apply our method to two biomedical extractive question answering datasets, COVID-QA and RadQA, achieving a new benchmark on the former and demonstrating overall improvements on the latter. Code available upon publication.
Submission Number: 4092
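
The abstract outlines a two-step pipeline: prompt a generative FM (Galactica) to produce a synthetic, domain-styled corpus, then continue pre-training a QA backbone on that corpus before fine-tuning on COVID-QA or RadQA. Since the authors' code is not yet released, the following is a minimal sketch of that pipeline using Hugging Face transformers; the checkpoint names (facebook/galactica-125m, roberta-base), the prompts, and all hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Sketch of "targeted pre-training": (1) prompt Galactica to generate a
# synthetic corpus in the target domain's style, then (2) continue
# masked-LM pre-training of an extractive-QA backbone on that corpus.
# All prompts, checkpoints, and hyperparameters here are assumptions.
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import Dataset

# --- Step 1: generate a targeted synthetic corpus with Galactica ---
gen_name = "facebook/galactica-125m"  # small checkpoint for illustration
gen_tok = AutoTokenizer.from_pretrained(gen_name)
gen_model = AutoModelForCausalLM.from_pretrained(gen_name)

# Hypothetical prompts mimicking the target styles named in the abstract
# (research papers, radiology reports).
prompts = [
    "Title: Chest CT findings in patients with COVID-19 pneumonia\n\n",
    "FINDINGS: The lungs are clear bilaterally.",
]
synthetic_texts = []
for p in prompts:
    ids = gen_tok(p, return_tensors="pt").input_ids
    out = gen_model.generate(
        ids, max_new_tokens=256, do_sample=True, top_p=0.9, temperature=0.8
    )
    synthetic_texts.append(gen_tok.decode(out[0], skip_special_tokens=True))

# --- Step 2: continued pre-training (MLM) on the synthetic corpus ---
qa_name = "roberta-base"  # stand-in backbone; the paper's choice may differ
qa_tok = AutoTokenizer.from_pretrained(qa_name)
qa_model = AutoModelForMaskedLM.from_pretrained(qa_name)

corpus = Dataset.from_dict({"text": synthetic_texts})
tokenized = corpus.map(
    lambda ex: qa_tok(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)
collator = DataCollatorForLanguageModeling(qa_tok, mlm_probability=0.15)
args = TrainingArguments(
    output_dir="targeted-pt",
    num_train_epochs=1,
    per_device_train_batch_size=2,
)
Trainer(
    model=qa_model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
).train()
# The adapted qa_model would then be fine-tuned on COVID-QA / RadQA with a
# standard extractive-QA (span-prediction) head.
```

In the paper's setting, large-scale generation and subsequent span-prediction fine-tuning would replace the toy prompts and single-epoch MLM run shown here.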