Inversive-Reasoning Augmentation for Natural Language Inference

Published: 01 Jan 2024 · Last Modified: 21 Aug 2025 · ICASSP 2024 · CC BY-SA 4.0
Abstract: Natural language inference (NLI) aims to infer the relationship between two texts: a premise and a hypothesis. However, many existing methods overlook the overestimation of model performance caused by superficial correlation biases in NLI datasets. We study this problem and find that most current models treat NLI as a text-matching task, ignoring the asymmetry between the premise and the hypothesis. We therefore propose a simple and effective augmentation method, Inversive-Reasoning Augmentation (IRA), to remove the superficial correlation bias. After training different NLI models on IRA-augmented data derived from two widely used NLI datasets, we obtain a fairer evaluation of the performance and robustness of the various NLI models.
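
The abstract does not spell out the augmentation procedure, so the following Python sketch only illustrates the general idea of exploiting premise-hypothesis asymmetry by adding role-swapped copies of each example. The function name `augment_with_inversion` and the conservative label remapping are assumptions for illustration, not the paper's actual IRA algorithm.

```python
from typing import Dict, List

# Assumed relabeling when premise and hypothesis are swapped:
# entailment does not generally hold in the reverse direction, so it is
# conservatively mapped to "neutral"; contradiction is typically symmetric
# and is kept as-is. This mapping is an illustrative assumption.
REVERSED_LABEL = {
    "entailment": "neutral",
    "neutral": "neutral",
    "contradiction": "contradiction",
}


def augment_with_inversion(examples: List[Dict[str, str]]) -> List[Dict[str, str]]:
    """Return the original examples plus premise/hypothesis-swapped copies."""
    augmented = list(examples)
    for ex in examples:
        augmented.append({
            "premise": ex["hypothesis"],          # swap the two roles
            "hypothesis": ex["premise"],
            "label": REVERSED_LABEL[ex["label"]],  # assumed label mapping
        })
    return augmented


if __name__ == "__main__":
    data = [
        {"premise": "A man is playing a guitar on stage.",
         "hypothesis": "A man is playing an instrument.",
         "label": "entailment"},
    ]
    for ex in augment_with_inversion(data):
        print(ex)
```

A model trained on such role-swapped pairs can no longer rely on hypothesis-only or pair-order artifacts, which is the kind of superficial correlation bias the abstract targets; the exact inversion and relabeling rules used by IRA would need to be taken from the paper itself.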