Fine-tuning Strategies for Domain Specific Question Answering under Low Annotation Budget Constraints

Anonymous

16 Jan 2022 (modified: 05 May 2023) · ACL ARR 2022 January Blind Submission · Readers: Everyone
Abstract: The progress introduced by pre-trained language models and their fine-tuning has resulted in significant improvements on most downstream NLP tasks. Unsupervised fine-tuning of a language model, followed by further fine-tuning on the target task, has become the standard QA fine-tuning procedure. In this work, we demonstrate that this strategy is sub-optimal for fine-tuning QA models, especially under a low QA annotation budget, which is a common setting in practice due to the cost of extractive QA labeling. We draw our conclusions from an exhaustive analysis of the performance of alternatives to the sequential fine-tuning strategy on different QA datasets. Our experiments provide one of the first investigations into how to best fine-tune a QA system under a low budget, and are therefore of the utmost practical interest to the QA practitioner.
Paper Type: long
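For concreteness, the sketch below illustrates the sequential fine-tuning baseline described in the abstract: unsupervised (masked-language-model) fine-tuning on an in-domain corpus, followed by extractive QA fine-tuning on the small annotated set. It is a minimal sketch using the Hugging Face Transformers Trainer API; the base checkpoint, file paths, and hyperparameters are illustrative assumptions and are not details taken from the paper.

```python
# Minimal sketch of the sequential fine-tuning baseline:
# (1) unsupervised MLM fine-tuning on an in-domain corpus,
# (2) extractive QA fine-tuning on the (small) annotated budget.
# "bert-base-uncased", "domain_corpus.txt", and all hyperparameters are
# illustrative assumptions, not settings reported in the paper.
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    AutoModelForQuestionAnswering,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# --- Stage 1: unsupervised (MLM) fine-tuning on target-domain text ---
mlm_model = AutoModelForMaskedLM.from_pretrained(model_name)
domain_corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
domain_corpus = domain_corpus.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=384),
    batched=True,
    remove_columns=["text"],
)
mlm_trainer = Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="mlm_ckpt", num_train_epochs=1),
    train_dataset=domain_corpus,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
mlm_trainer.train()
mlm_trainer.save_model("mlm_ckpt")

# --- Stage 2: extractive QA fine-tuning on the annotated QA pairs ---
# The QA head is initialized from the stage-1 checkpoint, i.e. the two stages
# are chained sequentially; the annotated examples must be preprocessed into
# (input_ids, start_positions, end_positions) and trained with a second Trainer.
qa_model = AutoModelForQuestionAnswering.from_pretrained("mlm_ckpt")
```

The key design choice this sketch makes explicit is that stage 2 starts from the stage-1 checkpoint, i.e. the domain-adaptive and task-specific fine-tuning are strictly chained; the abstract argues that this sequential coupling is what becomes sub-optimal when the QA annotation budget is small.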