Few-Shot Learning with Siamese Networks and Label Tuning

Anonymous

16 Oct 2021 (modified: 05 May 2023) · ACL ARR 2021 October Blind Submission
Abstract: We study the problem of building text classifiers with little or no training data, commonly known as zero- and few-shot text classification. In recent years, an approach based on neural textual entailment models has been found to give strong results on a diverse range of tasks. In this work, we show that with proper pre-training, Siamese networks that embed texts and labels are a competitive alternative. These models allow for a large reduction in inference cost: constant in the number of labels rather than linear. Furthermore, we introduce label tuning, a simple and computationally efficient approach that adapts the models in a few-shot setup by changing only the label embeddings. While giving lower performance than model fine-tuning, this approach has the architectural advantage that a single encoder can be shared by many different tasks.
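To make the idea concrete, here is a minimal PyTorch sketch of classification by text-label embedding similarity with label tuning. The ToyEncoder, label names, and few-shot examples are hypothetical placeholders for illustration, not the paper's actual model or data; in the paper, the shared frozen encoder would be a pre-trained Siamese transformer.

```python
import torch
import torch.nn.functional as F

class ToyEncoder(torch.nn.Module):
    """Hypothetical stand-in for a pre-trained Siamese sentence encoder."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.emb = torch.nn.EmbeddingBag(vocab_size, dim)

    def forward(self, texts):
        # Crude hashing "tokenizer" -- illustration only.
        ids = [torch.tensor([hash(w) % 1000 for w in t.split()]) for t in texts]
        offsets = torch.tensor([0] + [len(i) for i in ids[:-1]]).cumsum(0)
        return self.emb(torch.cat(ids), offsets)

encoder = ToyEncoder()
for p in encoder.parameters():   # the shared encoder stays frozen
    p.requires_grad_(False)

labels = ["sports", "politics", "technology"]  # hypothetical label set
# Initialize label embeddings from the encoded label names; they become
# the only trainable parameters ("label tuning").
label_emb = torch.nn.Parameter(encoder(labels).clone())

def predict(texts):
    # Inference cost is constant in the number of labels: each text is
    # encoded once and compared against the cached label embeddings.
    return F.normalize(encoder(texts), dim=-1) @ F.normalize(label_emb, dim=-1).T

# Few-shot adaptation: optimize only the label embeddings.
few_shot_texts = ["the team won the match", "parliament passed the bill"]
few_shot_targets = torch.tensor([0, 1])
opt = torch.optim.Adam([label_emb], lr=0.1)
for _ in range(50):
    opt.zero_grad()
    loss = F.cross_entropy(predict(few_shot_texts) * 10.0, few_shot_targets)
    loss.backward()
    opt.step()

print(predict(["new smartphone released"]).argmax(-1))
```

Because only label_emb receives gradients, many tasks can share a single frozen encoder, and inference reduces to one matrix product against cached label embeddings.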