Interpretable Embeddings with Sparse Autoencoders: A Data Analysis Toolkit

Published: 30 Sept 2025 · Last Modified: 16 Nov 2025 · Mech Interp Workshop (NeurIPS 2025) Spotlight · CC BY 4.0
Open Source Links: https://github.com/nickjiang2378/interp_embed
Keywords: Sparse Autoencoders, Applications of interpretability
TL;DR: We show that SAEs are a versatile tool for data analysis through four tasks: data diffing, correlations, targeted clustering, and retrieval.
Abstract: Analyzing large-scale text corpora is a core challenge in machine learning, crucial for tasks like identifying undesirable model behaviors or biases in training data. Current methods often rely on costly LLM-based techniques (e.g., annotating dataset differences) or dense embedding models (e.g., for clustering), which lack control over the properties of interest. We propose using sparse autoencoders (SAEs) to create $\textit{SAE embeddings}$: representations whose dimensions map to interpretable concepts. Through four data analysis tasks, we show that SAE embeddings can surface novel data insights while offering the controllability that dense embeddings lack, at a lower cost than LLM-based methods. By computing statistical metrics over our embeddings, we can uncover insights such as (1) semantic differences between datasets and (2) unexpected concept correlations in documents. For example, by comparing model responses, we find that Grok-4 clarifies ambiguities more often than nine other frontier models. Relative to LLMs, SAE embeddings uncover larger differences at 2-8× lower cost and identify biases more reliably. Additionally, SAE embeddings are controllable: by filtering concepts, we can (3) cluster documents along axes of interest and (4) outperform dense embeddings on property-based retrieval. Using SAE embeddings, we study model behavior with two case studies: investigating how OpenAI model behavior has changed over new releases and finding a learned spurious correlation in Tulu-3's (Lambert et al., 2024) training data. These results position SAEs as a versatile tool for unstructured data analysis and highlight the neglected importance of interpreting models through their $\textit{data}$.
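To make the pipeline concrete, below is a minimal sketch of the operations the abstract describes: encoding LLM activations into sparse SAE features, diffing two corpora by mean feature activation, and filtering concepts before clustering or retrieval. It assumes a standard ReLU SAE and mean pooling over tokens; the function names, shapes, and pooling choice are illustrative assumptions and do not reflect the interp_embed API.

```python
# Illustrative sketch only: a ReLU SAE with mean pooling is assumed;
# names and shapes here are hypothetical, not the interp_embed API.
import numpy as np


def sae_embed(activations: np.ndarray, W_enc: np.ndarray, b_enc: np.ndarray) -> np.ndarray:
    """Encode dense LLM activations into a sparse, interpretable document embedding.

    activations: (n_tokens, d_model) residual-stream activations for one document
    W_enc:       (d_model, n_features) SAE encoder weights
    b_enc:       (n_features,) SAE encoder bias
    Returns an (n_features,) embedding: ReLU feature activations mean-pooled over tokens.
    """
    feats = np.maximum(activations @ W_enc + b_enc, 0.0)  # ReLU -> sparse features
    return feats.mean(axis=0)                             # pool over tokens


def diff_datasets(emb_a: np.ndarray, emb_b: np.ndarray, top_k: int = 20) -> np.ndarray:
    """Rank SAE features by mean activation difference between two corpora.

    emb_a, emb_b: (n_docs, n_features) stacked document embeddings
    Returns indices of the top_k features most over-represented in corpus A.
    """
    delta = emb_a.mean(axis=0) - emb_b.mean(axis=0)
    return np.argsort(-delta)[:top_k]


def filter_concepts(emb: np.ndarray, keep: np.ndarray) -> np.ndarray:
    """Zero out all features except a chosen concept subset, so that downstream
    clustering or retrieval is restricted to the axis of interest."""
    mask = np.zeros(emb.shape[-1], dtype=bool)
    mask[keep] = True
    return emb * mask
```

In this reading, each interpretable feature index can be inspected (e.g., via its top-activating examples), so the top-ranked features from `diff_datasets` directly name the concepts that distinguish the two corpora.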
Submission Number: 84