DLM-Scope: Mechanistic Interpretability of Diffusion Language Models via Sparse Autoencoders

Published: 02 Mar 2026, Last Modified: 06 Mar 2026 · ICLR 2026 Trustworthy AI · CC BY 4.0
Keywords: diffusion language model, sparse autoencoder, interpretability
Abstract: Sparse autoencoders (SAEs) have become a standard tool for mechanistic interpretability in autoregressive large language models (LLMs), enabling researchers to extract sparse, human-interpretable features and intervene on model behavior. As diffusion language models (DLMs) have recently emerged as an increasingly promising alternative to autoregressive LLMs, it is essential to develop mechanistic interpretability tools tailored to this class of models. In this work, we present **DLM-Scope**, the first SAE-based interpretability framework for DLMs, and demonstrate that trained Top-$k$ SAEs can faithfully extract interpretable features. Notably, we find that inserting SAEs affects DLMs differently than autoregressive LLMs: while SAE insertion in LLMs typically incurs a loss penalty, in DLMs it can reduce cross-entropy loss when applied to early layers, a phenomenon absent or markedly weaker in LLMs. Additionally, SAE features in DLMs enable more effective diffusion-time interventions, often outperforming LLM steering. Moreover, we pioneer new SAE-based research directions for DLMs: we show that SAEs can provide useful signals for DLM decoding order, and that SAE features remain stable during the post-training phase of DLMs. Our work establishes a foundation for mechanistic interpretability in DLMs and highlights the potential of applying SAEs to DLM-related tasks and algorithms.
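To make the abstract's central object concrete, the following is a minimal sketch of a Top-$k$ SAE of the kind the paper trains: encode a residual-stream activation, keep only the $k$ largest latent activations, and decode back to model space. All sizes, weights, and function names here are illustrative assumptions, not the paper's actual configuration or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper's exact settings are not given in the abstract.
d_model, d_sae, k = 64, 512, 8

# Random weights stand in for trained encoder/decoder parameters.
W_enc = rng.standard_normal((d_model, d_sae)) / np.sqrt(d_model)
W_dec = rng.standard_normal((d_sae, d_model)) / np.sqrt(d_sae)
b_enc = np.zeros(d_sae)
b_dec = np.zeros(d_model)

def topk_sae(x):
    """Encode an activation, apply the Top-k sparsity constraint, decode."""
    pre = (x - b_dec) @ W_enc + b_enc
    acts = np.maximum(pre, 0.0)          # ReLU latent activations
    # Zero out all but the k largest activations (the Top-k constraint).
    acts[np.argsort(acts)[:-k]] = 0.0
    return acts @ W_dec + b_dec, acts

x = rng.standard_normal(d_model)
recon, acts = topk_sae(x)
print((acts > 0).sum() <= k)  # at most k latents are active
```

When such an SAE is "inserted" into a model, the reconstruction `recon` replaces the original activation `x` at a chosen layer; the paper's early-layer loss-reduction finding concerns the effect of exactly this substitution in DLMs.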
Submission Number: 145