The Role of Role Design in In-Context Learning for Large Language Models

ACL ARR 2024 June Submission4766 Authors

16 Jun 2024 (modified: 03 Jul 2024) · CC BY 4.0
Abstract: In-context learning (ICL) enables Large Language Models (LLMs) to generate predictions from prompts without additional fine-tuning. While prompt engineering has been widely studied, the impact of role design within prompts remains underexplored. This study examines the influence of role configurations in zero-shot and few-shot learning scenarios using GPT-3.5 and GPT-4o from OpenAI, and Llama2-7b and Llama2-13b from Meta. We evaluate the models' performance across multiple datasets, focusing on tasks such as sentiment analysis, text classification, and question answering, and use F1 scores to measure the effectiveness of different role designs. Our findings highlight the potential of role-based prompt structuring to enhance LLM performance, offering new insights for optimizing prompt design strategies in natural language processing tasks.
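To make role configuration concrete: a role-designed prompt prepends a persona (typically as a system message) before the task input, while the baseline prompt omits it. The sketch below shows a zero-shot sentiment-analysis prompt with and without a role, using the OpenAI Python SDK; the persona text, prompt wording, and model name are illustrative assumptions, not the paper's exact experimental setup.

```python
# Minimal sketch of role-based prompt structuring for zero-shot sentiment
# analysis (OpenAI Python SDK >= 1.0). The role text, prompt wording, and
# model name are illustrative, not taken from the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def classify_sentiment(text: str, role: str | None = None) -> str:
    """Ask the model to label sentiment, optionally under a role persona."""
    messages = []
    if role:
        # Role design: assign the model a persona via a system message.
        messages.append({"role": "system", "content": role})
    messages.append({
        "role": "user",
        "content": f"Classify the sentiment of this review as positive or negative:\n{text}",
    })
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content


review = "The plot dragged, but the acting was superb."
print(classify_sentiment(review))  # baseline: no role
print(classify_sentiment(
    review,
    role="You are an expert film critic who labels review sentiment precisely.",
))  # role-configured prompt
```

Comparing the two calls on the same input, with F1 computed over a labeled dataset, mirrors the kind of baseline-versus-role evaluation the abstract describes.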
Paper Type: Short
Research Area: Language Modeling
Research Area Keywords: prompting, applications, robustness
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 4766