UNLOCKING HIERARCHICAL CONCEPT DISCOVERY IN LANGUAGE MODELS THROUGH GEOMETRIC REGULARIZATION

Published: 13 Mar 2025, Last Modified: 16 Apr 2025 · BuildingTrust · CC BY 4.0
Track: Long Paper Track (up to 9 pages)
Keywords: AI Safety, Trustworthy AI, Hierarchical Representation Learning, Mechanistic Interpretability, Sparse Autoencoder, Feature Absorption, Feature Splitting
TL;DR: Some concepts deserve to be activated more than others, so we differentiate their $\ell_1$ weights in a systematic way. This differentiation separates broad concepts from narrow ones and helps address the challenge of feature absorption.
Abstract: We present Exponentially-Weighted Group Sparse Autoencoders (EWG-SAE), which aim to balance reconstruction quality and feature sparsity while resolving emerging problems such as feature absorption in interpretable language model analysis, in a linguistically principled way through geometrically decaying group sparsity. Current sparse autoencoders struggle with merged hierarchical features because uniform regularization encourages broader features to be absorbed into more specific ones (e.g., "starts with S" being absorbed into "short"). Our architecture introduces hierarchical sparsity via $K=9$ dimension groups with exponentially decaying regularization ($\lambda_k = \lambda_{\text{base}} \times 0.5^k$), reducing absorption while maintaining state-of-the-art reconstruction fidelity, strong sparse-probing scores, and competitive $\ell_1$ loss. The geometric structure enables precise feature isolation, with negative inter-group correlations confirming hierarchical organization.
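
The page does not include an implementation, so the following is a minimal PyTorch sketch of the geometrically decaying group-sparsity penalty the abstract describes ($\lambda_k = \lambda_{\text{base}} \times 0.5^k$ over $K=9$ groups). The function name `ewg_sparsity_loss`, the contiguous chunking of latent dimensions, the value of `lambda_base`, and the batch-mean reduction are illustrative assumptions, not the authors' code.

```python
import torch

def ewg_sparsity_loss(latents: torch.Tensor,
                      lambda_base: float = 1e-3,  # assumed value, not from the paper
                      num_groups: int = 9) -> torch.Tensor:
    """Exponentially-weighted group L1 penalty (illustrative sketch).

    Splits the SAE latent dimensions into `num_groups` contiguous groups
    and applies an L1 penalty whose weight decays geometrically across
    groups: lambda_k = lambda_base * 0.5**k. Groups with a larger penalty
    are pushed toward rarely-active features; groups with a smaller
    penalty can sustain more frequently active ones.
    """
    groups = torch.chunk(latents, num_groups, dim=-1)
    loss = latents.new_zeros(())
    for k, group in enumerate(groups):
        lambda_k = lambda_base * (0.5 ** k)
        # Per-example L1 norm of this group's activations, averaged over the batch.
        loss = loss + lambda_k * group.abs().sum(dim=-1).mean()
    return loss

# Example: a batch of 32 latent vectors, 9 groups of 128 dimensions each.
z = torch.relu(torch.randn(32, 9 * 128))
penalty = ewg_sparsity_loss(z)
```

In a full SAE training loop, a penalty of this form would presumably be added to the reconstruction loss in place of the usual uniform $\ell_1$ term.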
Submission Number: 132