Keywords: Large Language Models, Prompt Engineering, Automatic Prompt Generation
TL;DR: We present a fast automatic prompting method that uses synthesized few-shot examples and matches or outperforms recent automatic prompting methods on text tasks and GSM8K while using substantially less computation and training data.
Abstract: LLMs are highly sensitive to prompt design, but handcrafting effective prompts is difficult and often requires intricate crafting of few-shot examples. We propose a fast automatic prompt construction algorithm that augments human instructions by generating a small set of few-shot examples. Our method iteratively replaces, drops, or keeps few-shot examples based on Monte Carlo Shapley estimates of example utility. To speed up evaluation, we use aggressive subsampling and a replay buffer. Our method can be run under different compute-time budgets. On a limited budget, we outperform existing automatic prompting methods on text simplification and GSM8K, and obtain second-best results on classification and summarization. With an extended, but still modest, compute budget we set a new state of the art among automatic prompting methods on classification, simplification, and GSM8K. Our results show that carefully constructed examples, rather than exhaustive instruction search, are the dominant lever for fast and data-efficient prompt engineering. We will make code and data publicly available upon acceptance.
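The abstract's core mechanism, Monte Carlo Shapley estimation of example utility, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the `score` callable (mapping an ordered subset of example indices to a scalar utility, e.g. dev-set accuracy of the resulting prompt), the function name, and all parameters are assumptions introduced here for illustration.

```python
import random

def mc_shapley(n_examples, score, num_samples=200, seed=0):
    """Monte Carlo estimate of each candidate example's Shapley value.

    Illustrative sketch only (not the paper's exact procedure).
    `score` is assumed to map a tuple of example indices to a scalar
    utility, e.g. accuracy of a prompt built from those examples.
    """
    rng = random.Random(seed)
    values = [0.0] * n_examples
    for _ in range(num_samples):
        # Sample a random ordering of the candidate examples.
        perm = list(range(n_examples))
        rng.shuffle(perm)
        prefix = []
        prev = score(tuple(prefix))  # utility of the empty prompt
        for idx in perm:
            # Marginal contribution of example `idx` given this prefix.
            prefix.append(idx)
            cur = score(tuple(prefix))
            values[idx] += cur - prev
            prev = cur
    # Average marginal contributions approximate Shapley values.
    return [v / num_samples for v in values]
```

A selection loop could then keep high-value examples, drop negative-value ones, and generate replacements for the rest; subsampling the evaluation set inside `score` and caching past evaluations in a replay buffer would cut cost, as the abstract describes.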
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 3177