Stable Natural Language Understanding via Invariant Causal Constraint

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: Natural Language Understanding (NLU) tasks require a model to understand the underlying semantics of input text. However, recent analyses demonstrate that NLU models tend to exploit dataset biases to achieve high dataset-specific performance, which often leads to performance degradation on out-of-distribution (OOD) samples. To increase performance stability, previous debiasing methods \emph{empirically} capture bias features from the data to prevent the model from relying on the corresponding biases. However, we argue that semantic information forms a \emph{causal} relationship with the target labels of the NLU task, whereas bias information is merely \emph{correlated} with the target labels. This distinction between semantic information and dataset biases remains insufficiently exploited, which limits the effectiveness of debiasing. To address this issue, we analyze the debiasing process from a \emph{causal perspective} and present a causal-invariance-based stable NLU framework (CI-sNLU).
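The abstract's core intuition, that causal (semantic) features predict the label stably across data distributions while bias features are only spuriously correlated, can be illustrated with a minimal invariance-style objective. The sketch below is a hypothetical illustration, not the paper's actual CI-sNLU method: it penalizes the variance of per-environment risks (in the spirit of invariant risk minimization variants), so a predictor relying on a bias feature whose correlation flips across environments is scored worse than one using the causal feature.

```python
import numpy as np

def logistic_loss(w, X, y):
    """Mean binary cross-entropy for linear logits X @ w."""
    z = X @ w
    p = 1.0 / (1.0 + np.exp(-z))
    eps = 1e-12  # numerical guard for log(0)
    return -np.mean(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))

def invariant_objective(w, envs, lam=1.0):
    """Average risk over environments plus a penalty on how much
    the risk varies across environments (a REx-style invariance term).

    envs: list of (X, y) pairs, one per training environment.
    A predictor using only causally stable features achieves similar
    risk everywhere, so its variance penalty stays near zero.
    """
    risks = np.array([logistic_loss(w, X, y) for X, y in envs])
    return risks.mean() + lam * risks.var()
```

For example, with two toy environments where feature 0 (causal) always agrees with the label but feature 1 (bias) flips its correlation, a weight vector using only feature 0 obtains a lower invariant objective than one relying on feature 1.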