Tutor-ICL: Guiding Large Language Models for Improved In-Context Learning Performance

ACL ARR 2024 June Submission5101 Authors

16 Jun 2024 (modified: 05 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: There has been a growing body of work focusing on the in-context learning (ICL) abilities of large language models (LLMs). However, how effective ICL can be remains an open question. This paper presents Tutor-ICL, a simple prompting method that guides LLMs through the ICL process, inspired by how effective instructors might engage their students in learning a task. Specifically, we propose presenting exemplar answers in a comparative format rather than the traditional single-answer format. We also show that including the test instance before the exemplars can improve performance, making it easier for LLMs to focus on relevant exemplars. Lastly, we include a summarization step before attempting the test, following a common human practice. Experiments on various classification tasks, conducted on both decoder-only LLMs (Llama 2, 3) and encoder-decoder LLMs (Flan-T5-XL, XXL), show that Tutor-ICL consistently boosts performance, achieving up to a 13.76% increase in accuracy.
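The abstract describes three prompt components: the test instance placed before the exemplars, exemplars in a comparative answer format, and a summarization step before the final answer. A minimal sketch of how such a prompt might be assembled is below; the function name, field names, and exact wording of the instructions are illustrative assumptions, not the paper's actual templates.

```python
# Hypothetical sketch of a Tutor-ICL-style prompt builder, based only on the
# abstract's description. All names and instruction wording are assumptions.

def build_tutor_icl_prompt(test_input, exemplars):
    """Assemble a prompt that (1) shows the test instance first,
    (2) presents each exemplar's candidate answers comparatively
    rather than as a single gold answer, and (3) asks the model to
    summarize before answering."""
    parts = [f"Test input (answer this at the end):\n{test_input}\n"]
    for i, ex in enumerate(exemplars, 1):
        # Comparative format: mark each candidate label as correct/incorrect.
        candidates = "\n".join(
            f"- {label}: {'correct' if label == ex['answer'] else 'incorrect'}"
            for label in ex["candidates"]
        )
        parts.append(
            f"Exemplar {i}:\n{ex['input']}\n"
            f"Candidate answers, compared:\n{candidates}\n"
        )
    parts.append(
        "First, summarize what the exemplars show about solving the task.\n"
        "Then answer the test input."
    )
    return "\n".join(parts)


# Example usage with a toy sentiment-classification exemplar.
prompt = build_tutor_icl_prompt(
    test_input="Review: 'A waste of two hours.'",
    exemplars=[
        {
            "input": "Review: 'An absolute delight.'",
            "candidates": ["positive", "negative"],
            "answer": "positive",
        }
    ],
)
print(prompt)
```

Note how the test instance appears before any exemplar, matching the abstract's claim that this ordering helps the model attend to relevant exemplars.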
Paper Type: Short
Research Area: Language Modeling
Research Area Keywords: prompting
Contribution Types: Position papers
Languages Studied: English
Submission Number: 5101