Keywords: LLM, In-context learning, Classification
Abstract: *In-context learning* (ICL), an emergent capability of large language models (LLMs), enables models to solve previously unseen tasks by observing a few given in-context examples, without any further training. However, recent work finds that LLMs often produce unexpectedly fragmented decision boundaries on simple machine learning classification tasks (e.g., binary linear classification). Although some efforts have been made on this problem, the phenomenon remains under-explored. In this paper, we first probe the in-context learning capability of LLMs under both implicit and explicit reasoning paradigms. Our observations indicate that LLMs consistently fail to achieve smooth decision boundaries in all cases, and that implicit reasoning yields better decision boundaries than explicit reasoning. Moreover, LLMs tend to address classification tasks in a manner resembling classical machine learning algorithms. Building on these basic observations, we investigate the behaviors of LLMs to gain a deeper understanding of their in-context learning capability on discriminative tasks.
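As a concrete illustration of the probing setup described above, the following is a minimal sketch (not the authors' code) of how a decision boundary can be mapped via ICL: serialize labeled points as in-context examples, then query the model on a grid. The `query_llm` helper is a hypothetical placeholder for any actual LLM call.

```python
import numpy as np
from sklearn.datasets import make_classification

def query_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to an LLM, return its text output."""
    raise NotImplementedError("plug in an actual LLM API or local model here")

# 1. Generate a simple, roughly linearly separable binary classification task.
X, y = make_classification(n_samples=32, n_features=2, n_informative=2,
                           n_redundant=0, class_sep=2.0, random_state=0)

# 2. Serialize the labeled points as in-context examples.
examples = "\n".join(f"Input: {x[0]:.2f}, {x[1]:.2f} -> Label: {label}"
                     for x, label in zip(X, y))

# 3. Query the LLM on a dense grid; the resulting label map approximates the
#    model's decision boundary (fragmented regions => a non-smooth boundary).
xx, yy = np.meshgrid(np.linspace(X[:, 0].min(), X[:, 0].max(), 20),
                     np.linspace(X[:, 1].min(), X[:, 1].max(), 20))
grid_preds = np.empty(xx.size, dtype=int)
for i, (gx, gy) in enumerate(zip(xx.ravel(), yy.ravel())):
    prompt = f"{examples}\nInput: {gx:.2f}, {gy:.2f} -> Label:"
    grid_preds[i] = int(query_llm(prompt).strip())
grid_preds = grid_preds.reshape(xx.shape)
```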
To this end, we conduct a series of analyses exploring how LLMs perform discriminative tasks. We examine the classification behaviors of LLMs when they are prompted to follow specified machine learning algorithms and when they face high-dimensional classification tasks. We then propose a method to determine whether LLMs implicitly leverage machine learning algorithms when addressing classification tasks. Finally, we revisit the decision boundaries of LLMs from the perspective of data distributions. Overall, our analyses provide important observations and insights into the behaviors of LLMs on discriminative tasks.
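One simple way to operationalize the question of whether an LLM implicitly leverages a known algorithm (this is an illustrative sketch under our own assumptions, not necessarily the paper's actual test) is to measure how often the LLM's labels agree with those of a fitted reference classifier, such as k-NN:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def agreement_with_algorithm(X_train, y_train, X_query, llm_preds, k=3):
    """Fraction of query points on which the LLM's labels match k-NN's.

    llm_preds: labels collected from the LLM on X_query, e.g. via the
    grid-probing loop sketched earlier.
    """
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    return float(np.mean(knn.predict(X_query) == np.asarray(llm_preds)))
```

Consistently high agreement across many tasks would be (weak) evidence that the LLM behaves like that reference algorithm; low agreement everywhere would suggest it follows no single classical procedure.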
Primary Area: foundation or frontier models, including LLMs
Supplementary Material: zip
Submission Number: 4316