Towards Intrinsic Interpretability of Large Language Models: A Survey of Design Principles and Architectures
Keywords: Large Language Models, Intrinsic Interpretability, Explainable AI, Model Transparency
Abstract: While Large Language Models (LLMs) have achieved strong performance across many NLP tasks, their opaque internal mechanisms hinder trustworthiness and safe deployment. Existing surveys in explainable AI largely focus on post-hoc explanation methods that interpret trained models through external approximations. In contrast, intrinsic interpretability, which builds transparency directly into model architectures and computations, has recently emerged as a promising alternative. This paper presents the first systematic review of recent advances in intrinsic interpretability for LLMs, categorizing existing approaches into five design paradigms: functional transparency, concept alignment, representational decomposability, explicit modularization, and latent sparsity induction. We further discuss open challenges and outline future research directions in this emerging field.
Paper Type: Long
Research Area: Special Theme (conference specific)
Research Area Keywords: Special Theme: Explainability of NLP Models
Contribution Types: Surveys
Languages Studied: English
Submission Number: 7219