Abstract: Interpreting the internal process of neural models has long been a challenge.
This challenge remains relevant in the era of large language models (LLMs) and in-context learning (ICL); for example, ICL poses a new interpretability question of which of the few-shot examples contributed to identifying and solving the task.
To this end, in this paper, we design synthetic diagnostic tasks of inductive reasoning, inspired by generalization tests in linguistics; here, most in-context examples are ambiguous with respect to their underlying rule, and one critical example disambiguates the demonstrated task.
The question is whether conventional input attribution (IA) methods can track such a reasoning process in ICL, i.e., identify the influential example.
Our experiments provide several practical findings; for example, a certain simple IA method works best, and the larger the model, the harder it generally is to interpret ICL with gradient-based IA methods.
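As a rough illustration of the kind of gradient-based input attribution the abstract refers to, the sketch below scores each prompt token by the norm of gradient-times-input for a Hugging Face causal LM; the model name, toy prompt, and scoring choices are placeholders and do not reproduce the paper's exact methods or diagnostic tasks.

```python
# Minimal gradient-times-input attribution sketch (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM would do
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Toy prompt in the spirit of few-shot demonstrations plus a query
# (not the paper's ambiguous/disambiguating task design).
prompt = "wug -> wugs\nblick -> blicks\nfep ->"
input_ids = tok(prompt, return_tensors="pt")["input_ids"]

# Embed tokens manually so gradients can flow back to the inputs.
embeds = model.get_input_embeddings()(input_ids).detach().requires_grad_(True)
logits = model(inputs_embeds=embeds).logits

# Attribute the top next-token prediction to each input position via
# gradient x input (one simple IA variant).
target = logits[0, -1].argmax()
logits[0, -1, target].backward()
token_scores = (embeds.grad[0] * embeds[0]).norm(dim=-1)

for token, score in zip(tok.convert_ids_to_tokens(input_ids[0].tolist()),
                        token_scores.tolist()):
    print(f"{token!r:>12}  {score:.4f}")
```

Per-example attributions, i.e., how much each in-context demonstration influenced the prediction, would then be obtained by aggregating these token-level scores over each demonstration's token span.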
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: feature attribution, probing
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 8075