Paragraph-based Transformer Pretraining for Multi-Sentence Inference

Anonymous

16 Jan 2022 (modified: 05 May 2023) · ACL ARR 2022 January Blind Submission
Abstract: Inference tasks such as answer sentence selection (AS2) and fact verification are typically solved by fine-tuning transformer-based models as individual sentence-pair classifiers. Recent studies show that these tasks benefit from modeling dependencies across multiple candidate sentences jointly. In this paper, we first show that popular pretrained transformers perform poorly when fine-tuned on multi-candidate inference tasks. We then propose a new pretraining objective that models paragraph-level semantics across multiple input sentences. Our evaluation on three AS2 datasets and one fact verification dataset demonstrates the superiority of our pretrained joint models over standard pretrained transformers for multi-candidate inference tasks.
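To make the joint setting concrete, here is a minimal sketch (not the authors' code) of how a question and several candidate answer sentences might be encoded together in a single transformer input, rather than scored as independent sentence pairs. The model name, example texts, and separator-based concatenation are illustrative assumptions.

```python
# Hypothetical illustration: jointly encoding a question with multiple
# candidate answers so self-attention can model cross-candidate dependencies.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # assumed backbone
model = AutoModel.from_pretrained("roberta-base")

question = "Who wrote Hamlet?"
candidates = [
    "Hamlet is a tragedy written by William Shakespeare.",
    "The play is set in Denmark.",
    "Shakespeare was born in Stratford-upon-Avon.",
]

# Concatenate the question and all candidates, separated by the model's
# separator token, forming one multi-sentence input.
sep = tokenizer.sep_token
joint_input = f" {sep} ".join([question] + candidates)

encoded = tokenizer(joint_input, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**encoded)

# One contextualized vector per token; a classification head over the
# separator (or pooled) positions could then score each candidate jointly.
print(outputs.last_hidden_state.shape)
```

In a pair-wise classifier each candidate would instead be encoded with the question in isolation, so the model never sees the other candidates; the joint encoding above is what the paper's paragraph-level pretraining objective is meant to prepare the transformer for.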
Paper Type: short