A Multi-Level Supervised Contrastive Learning Framework for Low-Resource Natural Language Inference

Published: 01 Jan 2023, Last Modified: 20 Jul 2023. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023.
Abstract: Natural Language Inference (NLI) is an increasingly essential task in natural language understanding, which requires inferring the relationship between a pair of sentences (a premise and a hypothesis). Recently, low-resource natural language inference has gained increasing attention, as it substantially reduces manual annotation costs and better fits real-world scenarios. Existing works fail to characterize discriminative representations between different classes with limited training data, which may cause errors in label prediction. Here we propose a multi-level supervised contrastive learning framework named MultiSCL for low-resource natural language inference. MultiSCL leverages sentence-level and pair-level contrastive learning objectives to discriminate between different classes of sentence pairs by bringing those in the same class together and pushing apart those in different classes. MultiSCL adopts a data augmentation module that generates different views of the input samples to better learn the latent representation. The pair-level representation is obtained from a cross-attention module. We conduct extensive experiments on three public NLI datasets in low-resource settings, and the accuracy of MultiSCL exceeds other models by 1.8%, 3.1%, and 4.1% on SNLI, MNLI, and SICK, respectively, with 5 instances per label. Moreover, our method outperforms the previous state-of-the-art method on cross-domain text classification tasks.
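For intuition, the sentence-level objective described in the abstract follows the general supervised contrastive pattern: same-class representations are pulled together and different-class representations are pushed apart in embedding space. Below is a minimal PyTorch sketch of such a loss under stated assumptions; the function name, temperature value, and batch shapes are illustrative, and MultiSCL's pair-level term, data augmentation views, and cross-attention module are not reproduced here.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Generic supervised contrastive loss (a sketch, not MultiSCL's exact form).

    embeddings: (batch, dim) sentence or pair representations
    labels:     (batch,) integer class labels
    """
    z = F.normalize(embeddings, dim=1)                    # L2-normalize embeddings
    sim = z @ z.T / temperature                           # scaled cosine similarities
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(self_mask, float("-inf"))       # exclude self-similarity
    # Log-softmax over each anchor's row of similarities.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Positives: other in-batch samples sharing the anchor's label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    # Average negative log-probability over each anchor's positives.
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
    return loss[pos_mask.any(dim=1)].mean()               # anchors with >=1 positive

# Illustrative usage: 8 embeddings with the three NLI labels {0, 1, 2}.
emb = torch.randn(8, 128)
labels = torch.tensor([0, 1, 2, 0, 1, 2, 0, 1])
print(supervised_contrastive_loss(emb, labels))
```

In the low-resource setting the paper targets, batches contain few examples per class, which is why such objectives are typically combined with augmented views of each input so every anchor has at least one positive.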