Keywords: Mixture of Experts, Mechanistic Interpretability, Sparsity, Superposition, Representations
Abstract: Mixture of Experts (MoE) models have become central to scaling large language models, yet their mechanistic differences from dense networks remain poorly understood. Previous work has explored how dense models use $\textit{superposition}$ to represent more features than dimensions, and how the degree of superposition depends on feature sparsity and feature importance. MoE models cannot be explained mechanistically through the same lens. We find that in MoEs neither feature sparsity nor feature importance causes discontinuous phase changes, and that network sparsity (the ratio of active to total experts) better characterizes their behavior. We develop new metrics for measuring superposition across experts. Our findings demonstrate that models with greater network sparsity exhibit greater $\textit{monosemanticity}$. We propose a new definition of expert specialization based on monosemantic feature representation rather than load balancing, and show that experts naturally organize around coherent feature combinations when initialized appropriately. These results suggest that network sparsity in MoEs may enable more interpretable models without sacrificing performance, challenging the common assumption that interpretability and capability are fundamentally at odds.
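For concreteness, a minimal sketch of the network-sparsity quantity named in the abstract (the ratio of active to total experts); the function name and the top-2-of-8 example are illustrative assumptions, not the paper's implementation or exact formalization.

```python
def network_sparsity(active_experts: int, total_experts: int) -> float:
    """Ratio of active to total experts, as described in the abstract.

    Hypothetical helper for illustration only; the paper may formalize
    network sparsity differently (e.g. as a function of this ratio).
    """
    if not 0 < active_experts <= total_experts:
        raise ValueError("expected 0 < active_experts <= total_experts")
    return active_experts / total_experts


# Example: top-2 routing over 8 experts activates 2/8 = 0.25 of the experts.
print(network_sparsity(active_experts=2, total_experts=8))  # 0.25
```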
Primary Area: interpretability and explainable AI
Submission Number: 22328