Linguistically-Inspired and Explainable Demonstration Retrieval for In-Context Learning

21 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: In-Context Learning, Demonstration Retrieval, Explainability
Abstract: In-context learning (ICL) is an emerging ability of language models, and its effectiveness hinges on selecting effective in-context examples for every query. Existing research predominantly relies on retrieval techniques to curate such candidate examples for each query. These examples are then ranked by a specialized scoring language model, which distinguishes between positive (effective) and negative (ineffective) examples as demonstrations. These results then inform the training of a dense retriever to select effective demonstrations for queries at test time. Existing approaches suffer from narrow selection criteria, lack of explainability, and limited robustness and transferability. This paper introduces a novel approach, grounded in linguistic principles, that defines the key criteria effective demonstrations should meet. These criteria are language-model agnostic, demonstrate superior performance not only in a standard ICL setting but also in domain adaptation settings and in contexts devoid of task-specific instructions, provide explanations for selecting demonstrations, and shed light on inherent biases in existing methods. The proposed approach outperforms five strong baselines across seven tasks. Notably, it achieves higher performance than models explicitly optimized for ICL, such as MetaICL, highlighting its potential applications on large-scale models.
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3456