A Universal Prompt Generator for Large Language Models

Published: 01 Nov 2023, Last Modified: 12 Dec 2023 · R0-FoMo Spotlight
Keywords: Language Models, Automatic Prompting, Language Feedback
TL;DR: We introduce UniPrompt, a novel approach that automatically generates high-quality, structured, human-like prompts for language models from scratch, given just a one-line task description.
Abstract: The performance of LLMs depends heavily on high-quality, task-specific prompts. However, prompt engineering relies on clever heuristics and requires multiple iterations. Some recent works attempt to automate this process by improving upon human-written prompts, but creating high-quality prompts from scratch remains an unresolved challenge owing to its inherent complexity. In this work, we propose UniPrompt, a novel technique for generating high-quality, human-like prompts from scratch. To do so, we identify characteristic features of human-generated prompts, such as being detailed and consisting of multiple sections. UniPrompt takes as input a single-sentence description of the task and generates human-like, sectioned prompts using an auxiliary language model. We train the model in two stages. First, the model is fine-tuned on multiple tasks using a novel dataset curated with GPT-4 across over 500 tasks. Second, we align the auxiliary model to generate task-relevant (high-accuracy) prompts by collecting a prompt preference dataset and optimizing the model with Direct Preference Optimization (DPO). Importantly, UniPrompt is task-agnostic: once trained, it can generate prompts for any task. We find that UniPrompt outperforms human-generated prompts, GPT-generated prompts, and other prompt optimization techniques across diverse tasks in medicine, causality, and hate speech by up to 5.1%, 7.2%, and 11.1%, respectively.
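The abstract's second training stage aligns the auxiliary prompt generator with DPO on a preference dataset of generated prompts. The submission page does not include training code, so the sketch below is only illustrative: the tensor values are toy data, the variable names are hypothetical, and the loss follows the standard DPO formulation (Rafailov et al., 2023) rather than any implementation detail confirmed by the paper.

```python
# Illustrative sketch (assumptions labeled): DPO alignment of an auxiliary
# prompt generator. Preference pairs would consist of a "chosen" prompt
# (higher downstream accuracy) and a "rejected" prompt for the same task
# description; here the per-sequence log-probabilities are toy numbers.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss over a batch of prompt preference pairs.

    Each tensor holds summed token log-probabilities of a full prompt
    under the trainable policy or the frozen fine-tuned reference model.
    """
    # Implicit reward: how much more the policy prefers each prompt
    # than the reference model does, scaled by beta.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the log-odds that the chosen prompt beats the rejected one.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy batch of 4 preference pairs (real values would come from the two models).
policy_w = torch.tensor([-12.0, -9.5, -15.2, -11.0])
policy_l = torch.tensor([-13.4, -10.1, -14.8, -12.6])
ref_w = torch.tensor([-12.5, -9.8, -15.0, -11.4])
ref_l = torch.tensor([-13.0, -10.0, -15.1, -12.2])
print(dpo_loss(policy_w, policy_l, ref_w, ref_l))  # scalar loss
```

Because the reference model is the stage-one fine-tuned generator, the beta-scaled log-ratio acts as an implicit reward that keeps the aligned model close to its supervised starting point while pushing it toward prompts that score higher on task accuracy.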
Submission Number: 115