Automated Few-Shot Prompt Generation For and From Large Language Models

Anonymous

16 Oct 2023 · ACL ARR 2023 October Blind Submission · Readers: Everyone
Abstract: Few-shot prompts are difficult for humans to construct, yet they can be critical to the performance of large language models on downstream tasks. I propose a framework for automatically generating few-shot prompts by selecting high-quality outputs sampled from the model itself, and I apply it to code generation. Within the framework I use High Probability Branching, a novel tree-based systematic search demonstrated to outperform conventional sampling in both accuracy and efficiency. I evaluate the framework by applying it to the GPT-J model on a subset of the HumanEval dataset. The prompt generated by the framework achieves a ten percent relative improvement over the model's performance with no prompt, six times the improvement obtained with a human-written prompt.
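The abstract describes High Probability Branching only as a tree-based systematic search over model outputs, so the following is a minimal illustrative sketch, not the paper's actual algorithm. It assumes the search expands only continuations whose next-token probability exceeds a threshold and ranks complete sequences by joint probability; the toy distribution, the `threshold` knob, and all names here are invented for illustration (the real method would query GPT-J).

```python
from math import log

# Toy next-token distribution standing in for a language model.
# All tokens and probabilities are invented for this sketch.
TOY_LM = {
    (): {"a": 0.6, "b": 0.4},
    ("a",): {"x": 0.7, "y": 0.3},
    ("b",): {"x": 0.5, "y": 0.5},
    ("a", "x"): {"<end>": 1.0},
    ("a", "y"): {"<end>": 1.0},
    ("b", "x"): {"<end>": 1.0},
    ("b", "y"): {"<end>": 1.0},
}

def high_probability_branching(prefix=(), threshold=0.25, logp=0.0):
    """Depth-first tree expansion that follows only branches whose
    next-token probability exceeds `threshold` (an assumed pruning
    rule; the paper's exact branching criterion is not given in the
    abstract). Returns complete sequences sorted by joint log-prob."""
    results = []
    for token, p in TOY_LM[prefix].items():
        if p < threshold:
            continue  # prune low-probability branches
        if token == "<end>":
            results.append((prefix, logp))
        else:
            results.extend(
                high_probability_branching(prefix + (token,),
                                           threshold, logp + log(p))
            )
    return sorted(results, key=lambda r: -r[1])

completions = high_probability_branching()
print(completions[0][0])  # highest joint-probability sequence: ('a', 'x')
```

Unlike independent temperature sampling, this systematic expansion never revisits a prefix and enumerates each high-probability completion exactly once, which is consistent with the efficiency claim in the abstract.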
Paper Type: long
Research Area: Generation
Contribution Types: NLP engineering experiment
Languages Studied: English, Python
Consent To Share Submission Details: On behalf of all authors, we agree to the terms above to share our submission details.