RIFF: Rephrasing Inputs for Few-shot Fine-tuning of Language Models

Anonymous

Submitted: 16 Dec 2023
Venue: ACL ARR 2023 December Blind Submission
Readers: Everyone
TL;DR: We propose RIFF, an efficient method for Rephrasing Inputs for Few-shot Fine-tuning of language models, used in combination with prompt optimization and efficient tuning methods.
Abstract: Pre-trained Language Models (PLMs) have shown remarkable performance when fine-tuned for downstream text processing tasks. Recently, researchers have effectively guided PLMs to perform specific tasks by optimizing input prompts (prompt optimization) or adjusting a small number of model parameters (efficient tuning). In this study, we explore the impact of altering the input text of the original task while fine-tuning language models with these recent prompt optimization and efficient tuning methods. To most effectively rewrite the input text, we apply a learning objective based on Maximum-Marginal Likelihood estimation in a few-shot setting. Experimenting with seven few-shot text classification datasets, we show that enriching training examples with the input text’s paraphrases at train and test time significantly enhances the performance of recent prompt optimization and efficient tuning techniques.
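The abstract does not spell out the training objective, but Maximum-Marginal Likelihood estimation over paraphrases typically means marginalizing the label likelihood over a set of candidate rewrites of each input. The sketch below is our own illustrative reading, not the paper's implementation; the function name `mml_loss` and both tensor arguments are hypothetical, and the exact factorization in the paper may differ.

```python
import torch

def mml_loss(logp_y_given_z: torch.Tensor, logq_z_given_x: torch.Tensor) -> torch.Tensor:
    """Illustrative Maximum-Marginal Likelihood loss over K paraphrases per example.

    logp_y_given_z: (batch, K) log-likelihood of the gold label under each paraphrase z.
    logq_z_given_x: (batch, K) log-probability of each paraphrase z given the original input x.
    """
    # log p(y|x) = log sum_z p(y|z) q(z|x), computed stably with logsumexp.
    log_marginal = torch.logsumexp(logp_y_given_z + logq_z_given_x, dim=-1)
    # Minimize the negative marginal log-likelihood over the batch.
    return -log_marginal.mean()
```

In such a setup, `logp_y_given_z` would come from scoring the label with the (efficiently tuned) PLM on each paraphrase, and `logq_z_given_x` from a paraphrase generator; since the abstract says paraphrases are used at train and test time, predictions at inference could likewise be aggregated across the paraphrase set.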
Paper Type: long
Research Area: Efficient/Low-Resource Methods for NLP
Contribution Types: Approaches to low-resource settings, approaches to low-compute settings (efficiency)
Languages Studied: English