Keywords: In-Context Learning, Demonstration Synthesis
Abstract: Considering the high cost of labeling demonstrations for in-context learning (ICL), many works propose synthesizing demonstrations using LLMs. However, the quality of synthesized demonstrations is limited by the capabilities and knowledge of the LLMs themselves. To address this, inspired by transfer learning, we propose In-Context Transfer Learning (ICTL), which synthesizes target task demonstrations by transferring labeled demonstrations from similar source tasks. ICTL consists of two steps: source sampling and target transfer. First, we define an optimization objective that minimizes transfer error to sample source demonstrations similar to the target task. Then, we employ LLMs to transfer the sampled source demonstrations to the target task. Results on 8 datasets across 4 mainstream tasks show that ICTL achieves a 1.7% relative performance improvement over existing methods, reaching state-of-the-art (SOTA) performance in demonstration synthesis.
Paper Type: Long
Research Area: Natural Language Generation
Research Area Keywords: few-shot QA, prompting
Contribution Types: NLP engineering experiment, Approaches to low-resource settings
Languages Studied: English
Submission Number: 1519