Inverse is Better! Fast and Accurate Prompt for Slot Tagging

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: Prompting methods have recently achieved impressive success in few-shot learning. These methods embed input samples within prompt sentence pieces and decode label-related tokens to map each sample to its label. However, this paradigm is very inefficient for slot tagging: because slots are spans of multiple consecutive words in a sentence, prompting methods must enumerate all n-gram token spans to find every possible slot, which greatly slows down prediction. To tackle this, we introduce an inverse paradigm for prompting. Unlike classic prompts, which map tokens to labels, we reversely predict slot values given slot types. Such inverse prompting requires only one prediction turn per slot type, greatly speeding up prediction. In addition, we propose a novel Iterative Prediction Strategy, through which the model learns to refine its predictions by considering the relations between different slot types. Somewhat surprisingly, we find that the proposed method not only predicts faster but also significantly improves accuracy (by over 6.1 F1 points in the 10-shot setting), achieving new state-of-the-art performance.
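The efficiency gap the abstract describes can be sketched with a toy comparison. This is a hypothetical illustration, not the paper's implementation: the prompt templates, slot-type names, and `[MASK]` placeholder are assumptions chosen to show why span enumeration scales with sentence length while inverse prompting scales only with the number of slot types.

```python
# Hypothetical sketch contrasting the two prompting paradigms for slot tagging.
# Classic prompting: enumerate every candidate n-gram span and query its label.
# Inverse prompting: one query per slot type, asking the model for the slot value.

def classic_prompts(tokens, max_n=4):
    """Build one prompt per candidate span (all n-grams up to max_n)."""
    prompts = []
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            span = " ".join(tokens[i:i + n])
            prompts.append(f'"{span}" is a [MASK] entity')  # assumed template
    return prompts

def inverse_prompts(slot_types):
    """Build one prompt per slot type; the model fills in the value."""
    return [f'The {t} is [MASK]' for t in slot_types]  # assumed template

tokens = "book a flight to new york tomorrow".split()
slot_types = ["destination", "date"]

# Classic prompting needs a prediction for every span (22 here),
# while inverse prompting needs only one per slot type (2 here).
print(len(classic_prompts(tokens)))
print(len(inverse_prompts(slot_types)))
```

The span-enumeration count grows roughly linearly in sentence length times `max_n`, whereas the inverse formulation is fixed by the (usually small) slot-type inventory, which is the source of the claimed speedup.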