How do transformers trained on next-token prediction represent their inputs? Our analysis reveals that in simple settings, transformers form intermediate representations with fractal structures distinct from, yet closely related to, the geometry of belief states of an optimal predictor. We identify the algorithmic process by which these representations form and connect this mechanism to constrained belief updating equations, offering insight into the geometric meaning of these fractals. These findings bridge the gap between the model-agnostic theory of belief state geometry and the specific architectural constraints of transformers.
Keywords: computational mechanics, mechanistic interpretability, belief state geometry
TL;DR: Transformers trained on sequences from simple Hidden Markov Models form fractal intermediate representations related to, but distinct from, optimal belief state geometries, which can be explained by constrained belief updating equations.
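For context on the belief updating referenced above: an optimal predictor of an HMM-generated sequence tracks a belief state (a distribution over hidden states) that is updated by Bayes' rule after each observed symbol, and the set of reachable belief states forms the geometry in question. A minimal sketch, with an illustrative two-state HMM (these transition matrices are invented for the example, not taken from the paper):

```python
import numpy as np

# T[x][i, j] = P(next state j, emit symbol x | current state i).
# Illustrative two-state, two-symbol HMM; rows are stochastic
# after summing over symbols x.
T = np.array([
    [[0.6, 0.1],
     [0.1, 0.1]],   # labeled transition matrix for symbol 0
    [[0.2, 0.1],
     [0.3, 0.5]],   # labeled transition matrix for symbol 1
])

def update_belief(eta, x):
    """Bayesian belief update: eta' = eta @ T[x], renormalized."""
    unnorm = eta @ T[x]
    return unnorm / unnorm.sum()

# Each observed symbol maps the belief state to a new point on the
# probability simplex; the closure of reachable points under these
# maps is the belief state geometry (often fractal).
eta = np.array([0.5, 0.5])
for x in [0, 1, 1, 0]:
    eta = update_belief(eta, x)
print(eta)
```

The paper's claim, as summarized above, is that transformer residual-stream representations follow a constrained version of this update rather than the unconstrained Bayesian one.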
Submission Number: 31