Context-Aware Transformer Pre-Training for Answer Sentence Selection

Anonymous

03 Sept 2022 (modified: 05 May 2023) · ACL ARR 2022 September Blind Submission
Abstract: Answer Sentence Selection (AS2) is one of the main components for building an accurate Question Answering pipeline. AS2 models rank a set of candidate sentences by how likely they are to answer a given question. The state of the art in AS2 exploits pre-trained transformers via transfer learning on large annotated datasets, while using local contextual information around each candidate sentence. In this paper, we propose three pre-training objectives designed to mimic the downstream fine-tuning task of contextual AS2, which allows language models to be specialized before fine-tuning for contextual AS2. Our experiments with continuous pre-training of RoBERTa and ELECTRA on two public and two large-scale industrial datasets show that our pre-training approaches can improve the accuracy of contextual AS2 baselines by up to 2.4%.
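To make the contextual AS2 setting concrete, below is a minimal sketch of how a transformer cross-encoder could score candidate sentences using their local context. The input layout (question paired with candidate plus surrounding sentences), the `score_candidates` helper, and the use of `roberta-base` are illustrative assumptions, not the paper's exact encoding or pre-trained checkpoints.

```python
# Hypothetical sketch of contextual AS2 scoring with a RoBERTa cross-encoder.
# Assumes the model has been fine-tuned for binary relevance; with an
# untuned checkpoint like "roberta-base" the scores are not meaningful.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-base"  # placeholder checkpoint, not the paper's model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def score_candidates(question, candidates, contexts):
    """Score each candidate sentence, using its local context.

    contexts[i] holds the sentences surrounding candidates[i] in the source
    document -- the "local contextual information" the abstract refers to.
    """
    scores = []
    for cand, ctx in zip(candidates, contexts):
        # Pack the question against (candidate + context) as a sentence pair;
        # the tokenizer inserts the model's separator tokens automatically.
        inputs = tokenizer(question, f"{cand} {ctx}",
                           truncation=True, max_length=256, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        # Probability of the "answers the question" class.
        scores.append(torch.softmax(logits, dim=-1)[0, 1].item())
    return scores

question = "Who wrote The Old Man and the Sea?"
candidates = ["Ernest Hemingway wrote the novella in 1951.",
              "The book is set in Cuba."]
contexts = ["It won the Pulitzer Prize in 1953.",
            "Its protagonist is an aging fisherman."]
ranked = sorted(zip(score_candidates(question, candidates, contexts), candidates),
                reverse=True)
print(ranked)
```

The proposed pre-training objectives would replace generic objectives (e.g., masked language modeling) with tasks over (question, sentence, context) triples during continuous pre-training, so that the model above starts fine-tuning already adapted to this input structure.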
Paper Type: short