Improving negation detection with negation-focused pre-training

Anonymous

08 Mar 2022 (modified: 05 May 2023) | NAACL 2022 Conference Blind Submission | Readers: Everyone
Paper Link: https://openreview.net/forum?id=kVeV2zg8EV
Paper Type: Short paper (up to four pages of content + unlimited references and appendices)
Abstract: Negation is a common linguistic feature that is crucial in many language understanding tasks, yet it remains a hard problem due to the diversity of its expression across different types of text. Recent work shows that state-of-the-art NLP models underperform on samples containing negation in various tasks, and that negation detection models do not transfer well across domains. We propose a new negation-focused pre-training strategy, involving targeted data augmentation and negation masking, to better incorporate negation information into language models. Extensive experiments on common benchmarks show that our proposed approach improves negation detection performance and generalizability over the strong baseline NegBERT (Khandelwal and Sawant, 2020).
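
The abstract names two ingredients, targeted data augmentation and negation masking. As a rough illustration of the second, the sketch below preferentially masks tokens from a hand-picked negation cue list during masked language modelling. The cue list, masking probabilities, and function names are illustrative assumptions for this sketch, not the paper's actual configuration.

```python
# Illustrative sketch of negation-focused masking for MLM pre-training.
# NEGATION_CUES, cue_prob, and other_prob are assumed values for the
# sketch; the paper's cue inventory and rates may differ.
import random

from transformers import AutoTokenizer

NEGATION_CUES = {"not", "no", "never", "none", "nothing", "neither",
                 "nor", "without", "cannot"}

def negation_mask(tokens, mask_token="[MASK]",
                  cue_prob=0.8, other_prob=0.05, rng=random):
    """Mask negation cues with high probability, other tokens with a
    low background probability (as in standard MLM)."""
    masked, labels = [], []
    for tok in tokens:
        is_cue = tok.lstrip("#").lower() in NEGATION_CUES  # handle WordPiece "##" prefixes
        p = cue_prob if is_cue else other_prob
        if rng.random() < p:
            masked.append(mask_token)
            labels.append(tok)    # model must recover the original token
        else:
            masked.append(tok)
            labels.append(None)   # None marks positions excluded from the loss
    return masked, labels

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokens = tokenizer.tokenize("The scan showed no evidence of infection.")
print(negation_mask(tokens))
```

Under this scheme, the cue "no" in the example sentence is masked roughly 80% of the time, so the model is repeatedly forced to predict negation cues from context, which is one plausible way to realise the negation-masking objective the abstract describes.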
Presentation Mode: This paper will be presented in person in Seattle
Virtual Presentation Timezone: UTC+10
Copyright Consent Signature (type Name Or NA If Not Transferrable): Truong Hung Thinh
Copyright Consent Name And Address: Truong Hung Thinh, The University of Melbourne, 700 Swanston Street, Carlton, Melbourne VIC 3053