Data Augmentation for Intent Classification with Off-the-shelf Large Language Models

Anonymous

04 Mar 2022 (modified: 05 May 2023) · NLP for ConvAI
Keywords: data augmentation, intent classification, large language models
TL;DR: We study the applicability of different GPT models for augmenting training data for few-shot intent classification.
Abstract: Data augmentation is a widely employed technique to alleviate the problem of data scarcity. In this work, we propose a prompting-based approach to generating labelled training data for intent classification with off-the-shelf language models (LMs) such as GPT-3. An advantage of this method is that no task-specific LM fine-tuning is required for data generation; hence the method requires no hyperparameter tuning and is applicable even when the available training data is very scarce. We evaluate the proposed method in a few-shot setting on four diverse intent classification tasks. We find that GPT-generated data significantly boosts the performance of intent classifiers when the intents under consideration are sufficiently distinct from each other. In tasks with semantically close intents, we observe that the generated data is less helpful. Our analysis shows that this is because GPT often generates utterances that belong to a closely related intent instead of the desired one. We present preliminary evidence that a prompting-based GPT classifier could be helpful in filtering the generated data to enhance its quality.
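For illustration, below is a minimal sketch of the kind of prompting-based augmentation the abstract describes: a handful of seed utterances for one intent are formatted into a list-continuation prompt, and an off-the-shelf LM is asked to extend the list. The function names, the prompt wording, and the `complete_with_lm` callback (a placeholder for whatever completion API is used, e.g. GPT-3) are illustrative assumptions, not the paper's exact prompt or pipeline.

```python
# Sketch of prompting-based data augmentation for few-shot intent classification.
# The prompt template and the LM completion call are illustrative assumptions;
# the paper's actual prompt and decoding settings may differ.

from typing import Callable, List


def build_prompt(intent: str, seed_utterances: List[str]) -> str:
    """Format a few seed examples of one intent into a numbered-list prompt."""
    lines = [f"The following are user requests expressing the intent '{intent}':"]
    lines += [f"{i + 1}. {u}" for i, u in enumerate(seed_utterances)]
    # End with the next list index so the LM continues with new utterances.
    lines.append(f"{len(seed_utterances) + 1}.")
    return "\n".join(lines)


def augment_intent(intent: str,
                   seed_utterances: List[str],
                   complete_with_lm: Callable[[str], str],
                   n_new: int = 10) -> List[str]:
    """Generate up to n_new synthetic labelled utterances for one intent."""
    prompt = build_prompt(intent, seed_utterances)
    completion = complete_with_lm(prompt)  # placeholder for an LM completion API
    # Parse the continued numbered list; keep non-empty, novel lines only.
    generated: List[str] = []
    for line in completion.splitlines():
        text = line.split(".", 1)[-1].strip()
        if text and text not in seed_utterances and text not in generated:
            generated.append(text)
    return generated[:n_new]
```

The abstract's filtering step, a prompting-based GPT classifier that keeps only generated utterances matching the intended intent, would then be applied to the returned list before adding it to the training data.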