Abstract: Few-shot text classification aims to recognize unseen classes from limited labeled text samples. Typical meta-learning methods, e.g., Prototypical Networks, face several problems: (1) the limited words in each sentence make it difficult to extract fine-grained, class-related semantic information; (2) the semantic information carried by the labels is not fully utilized, leading to ambiguous class definitions; (3) randomly selected support samples may not represent their corresponding classes well. In this paper, we propose to leverage label semantics to tackle these problems and present Label Guided Prototype Networks (LGPN). First, we use prompt encoding to generate text representations instead of aggregating the words in each sentence, extracting more class-related semantic information. Second, we propose Label-guided Distance Scaling (LDS): in the training stage, we design a label-guided loss that pulls samples closer to their corresponding labels, making the class distributions distinguishable. Third, in the testing stage, we scale the text representations with the label semantics to pull each support sample closer to its class center, which reduces the prediction contradictions caused by randomly selected (i.e., unrepresentative) support samples. Extensive experiments on six benchmark datasets show that LGPN has clear advantages over state-of-the-art models. We further explore the effectiveness and generality of our modules.
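The label-guided scaling idea can be illustrated with a minimal sketch, assuming Euclidean-distance Prototypical Networks and a simple linear interpolation toward the label embedding (the function names, the interpolation weight `alpha`, and the embeddings below are hypothetical, not the paper's actual formulation):

```python
import numpy as np

def scale_toward_label(support, label_emb, alpha=0.5):
    # Hypothetical label-guided distance scaling: interpolate each support
    # embedding toward its class's label embedding, pulling supports closer
    # to a label-informed class center.
    return (1 - alpha) * support + alpha * label_emb

def prototypes(support_by_class, label_embs, alpha=0.5):
    # One prototype per class: the mean of the label-scaled support embeddings.
    return np.stack([
        scale_toward_label(s, l, alpha).mean(axis=0)
        for s, l in zip(support_by_class, label_embs)
    ])

def classify(query, protos):
    # Nearest prototype under Euclidean distance, as in Prototypical Networks.
    d = np.linalg.norm(protos - query, axis=1)
    return int(np.argmin(d))
```

With an outlying support sample, the unscaled prototype drifts away from the class center; scaling toward the label embedding dampens that drift, which is one plausible reading of how LDS reduces the contradictions caused by unrepresentative supports.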