Are Pre-trained Transformers Robust in Intent Classification? A Missing Ingredient in Evaluation of Out-of-Scope Intent Detection

Anonymous

04 Mar 2022 (modified: 05 May 2023) · NLP for ConvAI
Keywords: Language Understanding (NLU / SLU)
TL;DR: An investigation into whether pre-trained Transformers are robust in intent classification with respect to general and in-domain out-of-scope (OOS) examples.
Abstract: Pre-trained Transformer-based models have been reported to be robust in intent classification. In this work, we first point out the importance of in-domain out-of-scope detection in few-shot intent recognition tasks and then illustrate the vulnerability of pre-trained Transformer-based models to samples that are in-domain but out-of-scope (ID-OOS). We construct two new datasets and empirically show that pre-trained models perform poorly on both ID-OOS examples and general out-of-scope examples, especially on fine-grained few-shot intent detection tasks.
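For context, the vulnerability described above concerns the common baseline of rejecting an utterance as out-of-scope when the classifier's softmax confidence falls below a threshold; ID-OOS examples are problematic precisely because models tend to stay confidently wrong on them. Below is a minimal sketch of that baseline, not the paper's method: the model name, number of intents, and threshold value are illustrative assumptions (in practice the model would be fine-tuned on the intent data and the threshold tuned on a validation set).

```python
# Minimal sketch of confidence-threshold OOS detection on top of a
# Transformer intent classifier. Names and values below are assumptions,
# not the paper's setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # assumption: stands in for a fine-tuned intent model
NUM_INTENTS = 5                   # assumption: size of the in-scope intent set
THRESHOLD = 0.7                   # assumption: tuned on held-out data in practice

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_INTENTS
)
model.eval()

def classify_with_oos(utterance: str) -> str:
    """Return a predicted intent label, or 'oos' if max softmax confidence is low."""
    inputs = tokenizer(utterance, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze(0)
    confidence, intent_id = probs.max(dim=-1)
    # ID-OOS inputs often evade this check: the model assigns high confidence
    # to a semantically related in-scope intent, so no rejection is triggered.
    if confidence.item() < THRESHOLD:
        return "oos"
    return f"intent_{intent_id.item()}"

print(classify_with_oos("How do I reset my account password?"))
```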