Causal Augmentation for Causal Sentence Classification

Anonymous

16 May 2021 (modified: 05 May 2023) · ACL ARR 2021 May Blind Submission
Abstract: The scarcity of corpora with annotated causal texts can lead to poor robustness when training state-of-the-art language models for causal sentence classification. In particular, we find that these models misclassify augmented sentences whose causal meaning has been negated or strengthened. This is worrying because minor linguistic changes in causal sentences can carry very different meanings. To address this, we propose generating counterfactual causal sentences by creating contrast sets (Gardner et al., 2020). However, we find that simply introducing such edits is not sufficient to train models with counterfactuals. We therefore introduce heuristics, such as sentence shortening and multiplying key causal terms, to emphasize semantically important keywords to the model. We demonstrate these findings across different training setups and two out-of-domain corpora. Our proposed mixture of augmented edits consistently improves performance over the baseline for two models, both within and outside the corpus domain, suggesting that the proposed augmentation also helps the model generalize.
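To make the kind of augmentation described in the abstract concrete, the sketch below illustrates two of the mentioned heuristics: shortening a sentence around a causal cue and multiplying a key causal term. It is a minimal illustration under assumed conventions; the cue lexicon, function names, and exact edit rules are hypothetical placeholders, not the authors' implementation.

```python
# Minimal, illustrative sketch of keyword-emphasizing augmentation heuristics
# (sentence shortening and multiplying key causal terms). The cue lexicon,
# function names, and edit rules are hypothetical, for illustration only.

import re
from typing import Optional

# Hypothetical lexicon of causal cue phrases, longest first so "caused" is
# preferred over its substring "cause".
CAUSAL_CUES = ("leads to", "results in", "due to", "caused", "causes", "cause")


def find_causal_cue(sentence: str) -> Optional[str]:
    """Return the first causal cue phrase found in the sentence, if any."""
    for cue in CAUSAL_CUES:
        if re.search(r"\b" + re.escape(cue) + r"\b", sentence, flags=re.IGNORECASE):
            return cue
    return None


def shorten_around_cue(sentence: str, window: int = 3) -> str:
    """Shortening heuristic: keep only a token window around the causal cue,
    reducing surrounding noise so the causal keyword dominates the input."""
    cue = find_causal_cue(sentence)
    if cue is None:
        return sentence
    tokens = sentence.split()
    cue_first = cue.split()[0]
    idx = next(
        (i for i, t in enumerate(tokens) if t.lower().strip(".,;") == cue_first),
        None,
    )
    if idx is None:
        return sentence
    start, end = max(0, idx - window), min(len(tokens), idx + window + 1)
    return " ".join(tokens[start:end])


def multiply_cue(sentence: str, times: int = 2) -> str:
    """Multiplying heuristic: repeat the causal cue to increase its weight."""
    cue = find_causal_cue(sentence)
    if cue is None:
        return sentence
    repeated = " ".join([cue] * times)
    return re.sub(re.escape(cue), repeated, sentence, count=1, flags=re.IGNORECASE)


if __name__ == "__main__":
    s = "Heavy rainfall in the region ultimately caused severe flooding downstream."
    print(shorten_around_cue(s))  # the region ultimately caused severe flooding downstream.
    print(multiply_cue(s))        # ... ultimately caused caused severe flooding ...
```

In a setup like the one the abstract describes, such edited variants would be added to the training data alongside the contrast-set counterfactuals rather than used on their own.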