Keywords: transformers, world models, sparse autoencoders (SAEs), mechanistic interpretability, positional encodings, residual stream, feature activation, circuit analysis, model generalization
TL;DR: Transformers trained to solve mazes are found to acquire causal World Models. These can be discovered and intervened upon using SAEs, providing insights into their structure and causal characteristics.
Abstract: Recent studies in interpretability have explored the inner workings of transformer models trained on tasks across various domains, often discovering that these networks naturally develop highly structured representations. When such representations comprehensively reflect the structure of the task domain, they are commonly referred to as "World Models" (WMs). In this work, we identify WMs in transformers trained on maze-solving tasks. Using Sparse Autoencoders (SAEs) and analyzing attention patterns, we examine the construction of WMs and demonstrate consistency between SAE feature-based and circuit-based analyses. By subsequently intervening on isolated features to confirm their causal role, we find that it is easier to activate features than to suppress them. Furthermore, we find that models can reason about mazes involving more simultaneously active features than they encountered during training; however, when these same mazes (with greater numbers of connections) are instead provided to models via input tokens, the models fail. Finally, we observe that the choice of positional encoding scheme influences how World Models are structured within the model's residual stream.
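The abstract describes intervening on isolated SAE features to test their causal role. Below is a minimal, hypothetical Python/PyTorch sketch of that general technique, not the authors' code: a toy SAE decodes a chosen feature direction, which is then injected into a transformer block's residual stream via a forward hook. All module names, dimensions, the feature index, and the activation scale are illustrative assumptions.

```python
import torch
import torch.nn as nn

D_MODEL, N_FEATURES = 64, 512  # assumed residual-stream width and SAE dictionary size


class SparseAutoencoder(nn.Module):
    """Toy SAE: encodes residual-stream activations into sparse features."""

    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.encoder(x))

    def decode(self, f: torch.Tensor) -> torch.Tensor:
        return self.decoder(f)


sae = SparseAutoencoder(D_MODEL, N_FEATURES)

# Stand-in for one transformer block whose output is the residual stream we edit.
block = nn.Linear(D_MODEL, D_MODEL)

FEATURE_IDX = 42        # hypothetical feature assumed to encode a maze connection
ACTIVATION_SCALE = 5.0  # how strongly to inject the feature (assumption)


def activate_feature_hook(module, inputs, output):
    """Add the decoded direction of one SAE feature to the block's output."""
    one_hot = torch.zeros(N_FEATURES)
    one_hot[FEATURE_IDX] = ACTIVATION_SCALE
    # Subtract the decoder bias so only the scaled feature direction is injected.
    direction = sae.decode(one_hot) - sae.decode(torch.zeros(N_FEATURES))
    return output + direction  # returned value replaces the block's output


handle = block.register_forward_hook(activate_feature_hook)
resid = block(torch.randn(1, 3, D_MODEL))  # (batch, tokens, d_model)
handle.remove()
print(resid.shape)
```

Suppressing a feature would follow the same pattern with a negative scale (or by zeroing the feature's encoded activation and re-decoding); the abstract's finding is that such suppression is harder to achieve than activation.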
Submission Number: 60