Abstract: Neural networks can accurately forecast complex dynamical systems, yet how they internally represent the underlying dynamics remains poorly understood. We study neural forecasters through the lens of representational alignment, introducing anchor-based, geometry-agnostic relative embeddings that remove rotational and scaling ambiguities in latent spaces. Applying this framework across seven canonical dynamical systems—ranging from periodic to chaotic—we reveal reproducible family-level structure: multilayer perceptrons (MLPs) align with other MLPs and recurrent networks (RNNs) with other RNNs, while transformers and echo-state networks achieve strong forecasts despite weaker alignment. Alignment generally correlates with forecasting accuracy, yet high accuracy can coexist with low alignment.
Relative geometry thus provides a simple, reproducible foundation for comparing how model families internalize and represent dynamical structure.
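As a minimal sketch of the kind of anchor-based relative embedding the abstract describes: each latent vector is re-expressed as its cosine similarities to a fixed set of anchor latents, which makes the representation invariant to rotations and uniform scalings of the latent space. The function name, the NumPy formulation, and the choice of anchors below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def relative_embedding(Z, anchors):
    """Map latent vectors Z (n, d) to cosine similarities against
    anchor latents (k, d). Cosine similarity depends only on angles,
    so the result is unchanged by rotations and uniform scalings."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    An = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return Zn @ An.T  # (n, k) relative representation

# Invariance check under a random rotation and a uniform scaling.
rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 8))
A = Z[:10]                                    # illustrative anchor choice
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # random orthogonal rotation
R1 = relative_embedding(Z, A)
R2 = relative_embedding(3.0 * Z @ Q, 3.0 * A @ Q)
print(np.allclose(R1, R2))  # True: relative geometry is preserved
```

Because two models' relative embeddings live in the same anchor-indexed space, they can be compared directly, without first solving for an alignment between the two latent spaces.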
Submission Type: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We corrected a few typos.
Assigned Action Editor: ~Christian_Keup1
Submission Number: 6359