Transfer-Prompting: Enhancing Cross-Task Adaptation in Large Language Models via Dual-Stage Prompt Optimization
Abstract: Large Language Models (LLMs) face significant challenges in real-world applications that require them to produce high-quality responses while adhering to specific instructions. To address these issues, we introduce \textbf{Transfer-Prompting}, a novel two-stage framework designed to improve cross-task adaptation in prompt generation. The framework comprises two main components: (1) \textbf{source prompt construction}, which refines prompts on source task datasets to enhance their generalization capability, and (2) \textbf{target prompt generation}, which fine-tunes high-performing source prompts on task-specific datasets to optimize cross-task performance.
In each optimization cycle, a reference LLM generates candidate prompts from historical prompt-score pairs and the task description contained in the reference prompt. These candidates are iteratively refined, with a scorer LLM evaluating their effectiveness via an objective prompt evaluator. This feedback loop enables continuous refinement, improving both prompt quality and task-specific performance.
We validate Transfer-Prompting through extensive experiments involving 25 LLMs, including 7 foundational and 18 specialized models, across 9 diverse datasets. The results demonstrate that Transfer-Prompting significantly enhances task-specific performance, highlighting its potential to improve cross-task adaptation in LLMs.
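As a rough illustration of the optimization cycle described in the abstract, the sketch below stubs the reference and scorer LLMs as plain callables; the names (\texttt{optimize\_prompts}, \texttt{generate\_candidates}, \texttt{score\_prompt}) are hypothetical and not taken from the paper. Each stage of the framework would run such a loop on its own dataset, conditioning new candidates on the accumulated prompt-score history.

```python
import random
from typing import Callable, List, Tuple

def optimize_prompts(
    task_description: str,
    generate_candidates: Callable[[str, List[Tuple[str, float]]], List[str]],  # reference LLM (stub)
    score_prompt: Callable[[str], float],                                      # scorer LLM + objective evaluator (stub)
    seed_prompts: List[str],
    rounds: int = 5,
) -> Tuple[str, float]:
    """One optimization stage: iteratively refine prompts using prompt-score history."""
    # History of (prompt, score) pairs that conditions each new round of candidates.
    history: List[Tuple[str, float]] = [(p, score_prompt(p)) for p in seed_prompts]
    for _ in range(rounds):
        candidates = generate_candidates(task_description, history)
        for cand in candidates:
            history.append((cand, score_prompt(cand)))
    # Return the best-scoring prompt found across all rounds.
    return max(history, key=lambda pair: pair[1])

# Toy stand-ins so the sketch runs without any real LLM calls.
def toy_generate(task: str, history: List[Tuple[str, float]]) -> List[str]:
    best, _ = max(history, key=lambda pair: pair[1])
    return [f"{best} (variant {random.randint(0, 99)})" for _ in range(3)]

def toy_score(prompt: str) -> float:
    return random.random()  # placeholder for the objective prompt evaluator

if __name__ == "__main__":
    # Stage 1 (source prompt construction) and Stage 2 (target prompt generation)
    # would each invoke optimize_prompts with their own task description and scorer.
    best_prompt, best_score = optimize_prompts(
        "Summarize customer reviews.", toy_generate, toy_score, ["Summarize: {input}"]
    )
    print(best_prompt, best_score)
```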
Paper Type: Long
Research Area: Machine Learning for NLP
Research Area Keywords: prompt engineering, transfer learning, domain adaptation, few-shot learning
Contribution Types: NLP engineering experiment
Languages Studied: None
Submission Number: 1073