Learning How to Prompt with Large Language Models

ACL ARR 2024 June Submission3904 Authors

16 Jun 2024 (modified: 07 Jul 2024), ACL ARR 2024 June Submission, CC BY 4.0
Abstract: The remarkable performance of large language models (LLMs) depends heavily on the prompts they receive. Inappropriate prompts can significantly degrade their performance or trigger undesirable behaviors, such as the amplification of societal biases. Traditional methods for addressing these issues often overlook valuable information from LLMs' pre-training phase and process training examples one at a time, discarding crucial information shared across them. This paper presents Learning to Prompt (L2P), a framework that combines an LLM-based optimizer with meta-learning and a chain-of-thought mechanism. L2P optimizes each individual prompt effectively and generalizes to optimizing new, unseen prompts, significantly improving LLM performance. Extensive evaluations confirm that L2P outperforms state-of-the-art methods.
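To make the abstract's description more concrete, the sketch below shows one plausible form of an LLM-driven prompt-optimization loop: an optimizer LLM is shown previously tried prompts with their scores and is asked to reason step by step before proposing an improved prompt. This is only an illustrative sketch under stated assumptions, not the authors' L2P implementation; the helper names (call_llm, score_prompt) and the meta-prompt wording are hypothetical placeholders.

```python
from typing import Callable, List, Tuple


def optimize_prompt(
    task_examples: List[Tuple[str, str]],   # (input, expected output) pairs
    initial_prompt: str,
    call_llm: Callable[[str], str],         # hypothetical LLM interface, e.g. an API wrapper
    num_rounds: int = 5,
) -> str:
    """Iteratively ask an optimizer LLM to rewrite a task prompt (illustrative only)."""

    def score_prompt(prompt: str) -> float:
        # Exact-match accuracy of a candidate prompt on the held-out task examples.
        hits = sum(
            call_llm(f"{prompt}\n\nInput: {x}\nAnswer:").strip() == y
            for x, y in task_examples
        )
        return hits / len(task_examples)

    # Keep a scored history of prompts so the optimizer LLM can learn from it.
    history = [(initial_prompt, score_prompt(initial_prompt))]

    for _ in range(num_rounds):
        trajectory = "\n".join(f"score={s:.2f}: {p}" for p, s in history)
        # Chain-of-thought style meta-prompt: ask the optimizer to reason before answering.
        meta_prompt = (
            "You improve task prompts. Here are previous prompts and their scores:\n"
            f"{trajectory}\n"
            "Think step by step about why the best prompts work, then write a new "
            "prompt that should score higher. Return only the new prompt."
        )
        candidate = call_llm(meta_prompt).strip()
        history.append((candidate, score_prompt(candidate)))

    # Return the best-scoring prompt found during the search.
    return max(history, key=lambda ps: ps[1])[0]
```

In this sketch the scored history plays the role of the optimization trajectory and the meta-prompt elicits chain-of-thought reasoning; how L2P incorporates meta-learning and pre-training information is described in the paper itself, not here.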
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: model bias/fairness evaluation, model bias/unfairness mitigation, ethical considerations in NLP applications
Languages Studied: English
Submission Number: 3904