Probing the Decision Boundaries of In-context Learning in Large Language Models

Published: 09 Oct 2024, Last Modified: 15 Dec 2024
Venue: MINT@NeurIPS 2024
License: CC BY 4.0
Keywords: in-context learning; large language models; LLM decision boundary
Abstract: In-context learning enables large language models to generalize to new tasks when prompted with a few exemplars, without explicit parameter updates. In this work, we propose a new mechanism to probe and understand in-context learning through the lens of decision boundaries for in-context classification. Decision boundaries give a qualitative picture of a classifier's inductive biases. Surprisingly, we find that the decision boundaries learned by current LLMs on simple binary classification tasks are irregular and non-smooth. We investigate the factors that influence these boundaries and explore methods to improve their generalizability. Our findings offer insights into the dynamics of in-context learning and practical techniques for enhancing its robustness and generalizability.
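The probing mechanism described in the abstract can be illustrated with a minimal sketch: serialize a few labeled 2D exemplars into a few-shot prompt, then query the model's predicted label at every point of a dense grid to render the decision boundary. The helper names below (`build_prompt`, `probe_decision_boundary`) and the prompt format are illustrative assumptions, not the paper's actual implementation, and `mock_llm_label` is a hypothetical stand-in for a real LLM call.

```python
def build_prompt(exemplars, query):
    """Serialize few-shot exemplars and a query point into a text prompt.

    The "Input: x1 x2 Label: y" template is an assumed format for
    illustration; the paper's exact prompt may differ.
    """
    lines = [f"Input: {x1:.2f} {x2:.2f} Label: {y}" for (x1, x2), y in exemplars]
    lines.append(f"Input: {query[0]:.2f} {query[1]:.2f} Label:")
    return "\n".join(lines)


def mock_llm_label(prompt):
    """Hypothetical stand-in for an LLM call: labels the query point
    with a fixed linear rule (x1 + x2 > 1) so the sketch is runnable."""
    last = prompt.strip().splitlines()[-1]
    parts = last.split()  # ["Input:", "x1", "x2", "Label:"]
    x1, x2 = float(parts[1]), float(parts[2])
    return 1 if x1 + x2 > 1.0 else 0


def probe_decision_boundary(exemplars, grid_size=50, lo=0.0, hi=1.0,
                            label_fn=mock_llm_label):
    """Query the model at every point of a grid_size x grid_size grid.

    Returns the grid coordinates and a 2D list of predicted labels;
    plotting the labels reveals the (possibly irregular) boundary.
    """
    step = (hi - lo) / (grid_size - 1)
    xs = [lo + k * step for k in range(grid_size)]
    grid = [[label_fn(build_prompt(exemplars, (x1, x2))) for x2 in xs]
            for x1 in xs]
    return xs, grid
```

In practice `label_fn` would wrap an actual LLM query and parse the generated label token; the grid of predictions can then be visualized (e.g. as a contour plot) to inspect how smooth the in-context decision boundary is.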
Email Of Author Nominated As Reviewer: siyanz@cs.ucla.edu
Submission Number: 30