Unsupervised Few-shot Adaptation of Entailment Classifiers for Robust Natural Language Understanding

Anonymous

03 Sept 2022 (modified: 05 May 2023) · ACL ARR 2022 September Blind Submission · Readers: Everyone
Abstract: Although large-scale pretrained language models (LMs) have achieved significant improvements on many natural language tasks, their fine-tuning still relies heavily on task-specific data annotation and remains sensitive to adversarial evaluation examples. In this work, we propose an entailment self-training framework for improving the accuracy and robustness of unsupervised few-shot task adaptation for language understanding without using any labeled data on the target tasks. We pretrain language models on the natural language inference (NLI) task and adapt them to new tasks with coordinated prompts and pseudo-labels. We find that the coordinated prompts, which jointly describe the task, act as a supervision signal comparable to human labels and enable learning. The proposed method thus enables task-specific fine-tuning without human-generated labels. Experiments on the GLUE and AdvGLUE benchmarks show that the coordinated prompts consistently outperform no-prompt and single-prompt models under the unsupervised few-shot task adaptation setting. With preliminary pretraining on the entailment task and self-training, an unsupervised few-shot adapted medium-sized LM can outperform existing few-shot, large-scale LMs.
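
To make the adaptation recipe concrete, below is a minimal sketch (not the authors' released code) of entailment-based pseudo-labeling with multiple task prompts, using an off-the-shelf NLI checkpoint from Hugging Face Transformers. The roberta-large-mnli model, the sentiment example, the prompt wordings, the confidence threshold, and the mean aggregation rule are all illustrative assumptions; confident pseudo-labels would then feed a self-training loop that fine-tunes the entailment model on the target task.

```python
# Sketch: entailment-based pseudo-labeling with coordinated prompts.
# Assumptions (not from the paper): roberta-large-mnli checkpoint, SST-2-style
# sentiment task, prompt wordings, mean aggregation, 0.8 confidence threshold.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # any NLI-pretrained checkpoint could be used
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME).eval()

# Hypothetical "coordinated" prompts: several paraphrases per label that
# jointly describe the target task.
PROMPTS = {
    "positive": ["This review is positive.", "The author liked it."],
    "negative": ["This review is negative.", "The author disliked it."],
}

ENTAILMENT_ID = model.config.label2id.get("ENTAILMENT", 2)

@torch.no_grad()
def pseudo_label(text, threshold=0.8):
    """Return (label, confidence), or None if no label is confident enough."""
    scores = {}
    for label, hypotheses in PROMPTS.items():
        probs = []
        for hyp in hypotheses:
            enc = tokenizer(text, hyp, return_tensors="pt", truncation=True)
            logits = model(**enc).logits[0]
            probs.append(torch.softmax(logits, dim=-1)[ENTAILMENT_ID].item())
        # Aggregate entailment scores across the coordinated prompts.
        scores[label] = sum(probs) / len(probs)
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else None

# Unlabeled target-task examples; confident predictions become pseudo-labels
# for the next round of fine-tuning in the self-training loop.
unlabeled = ["A moving and beautifully acted film.", "Dull and far too long."]
print([(x, pseudo_label(x)) for x in unlabeled])
```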
Paper Type: long