Abstract: Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
However, their internal mechanisms are still unclear, and this lack of transparency poses risks for
downstream applications. Therefore, understanding and explaining these models is crucial for elucidating
their behaviors, limitations, and social impacts. In this article, we introduce a taxonomy of explainability
techniques and provide a structured overview of methods for explaining Transformer-based language models.
We categorize techniques based on the training paradigms of LLMs: traditional fine-tuning-based paradigm
and prompting-based paradigm. For each paradigm, we summarize the goals and dominant approaches for
generating local explanations of individual predictions and global explanations of overall model knowledge.
We also discuss metrics for evaluating generated explanations and how explanations can be leveraged
to debug models and improve performance. Lastly, we examine key challenges and emerging opportunities
for explanation techniques in the era of LLMs in comparison to conventional deep learning models.