ER-ICL: Error Book May Be More Valuable for In-context Learning

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission
TL;DR: We propose a retrieval method that selects in-context demonstrations from an "Error Book" of instances the LLM answers incorrectly, and show that it yields better results than retrieving from the full training set.
Abstract: In-context learning (ICL) with few-shot examples has emerged as a key strength of large language models (LLMs), allowing them to adapt to new tasks with just a few examples. Recent research suggests that ICL closely resembles implicit fine-tuning. Building on this, we hypothesize that demonstrations where LLMs make mistakes could offer stronger learning signals for ICL, potentially leading to enhanced performance compared to instances where LLMs predict correctly. To explore this, we created an "Error Book" comprising such demonstrations and used a retriever to select relevant instances from this collection instead of the entire training dataset. Our experiments across two different tasks show that this Error Book-based Retrieval In-Context Learning (ER-ICL) not only boosts performance but also improves retrieval efficiency by reducing the search scope. Our results indicate that leveraging error-driven demonstrations could be a valuable strategy for enhancing in-context learning.
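The abstract describes a two-stage pipeline: first collect an Error Book of training instances the LLM gets wrong, then retrieve demonstrations for each test query from that smaller pool rather than from the whole training set. The sketch below illustrates one way such a pipeline could be wired up; the `embed` and `llm_predict` callables and the cosine-similarity retriever are assumptions for illustration, not the paper's actual components.

```python
# Minimal sketch of an ER-ICL-style pipeline, assuming a user-supplied
# `llm_predict(text) -> label` function and an `embed(text) -> np.ndarray`
# encoder. Both are hypothetical stand-ins, not the paper's implementation.
import numpy as np

def build_error_book(train_set, llm_predict):
    """Keep only the training examples the LLM answers incorrectly."""
    error_book = []
    for example in train_set:
        prediction = llm_predict(example["input"])
        if prediction != example["label"]:
            error_book.append(example)
    return error_book

def retrieve_demonstrations(query, error_book, embed, k=4):
    """Select the k error-book examples most similar to the query."""
    query_vec = embed(query)
    book_vecs = np.stack([embed(ex["input"]) for ex in error_book])
    # Cosine similarity between the query and each error-book example.
    sims = book_vecs @ query_vec / (
        np.linalg.norm(book_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9
    )
    top_idx = np.argsort(-sims)[:k]
    return [error_book[i] for i in top_idx]

def build_icl_prompt(query, demonstrations):
    """Concatenate retrieved demonstrations with the test query."""
    shots = "\n\n".join(
        f"Input: {ex['input']}\nOutput: {ex['label']}" for ex in demonstrations
    )
    return f"{shots}\n\nInput: {query}\nOutput:"
```

Because the Error Book is a strict subset of the training data, the retrieval step scans fewer candidates, which is consistent with the efficiency claim in the abstract.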
Paper Type: short
Research Area: Information Retrieval and Text Mining
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English