Relative Geometry of Neural Forecasters: Linking Accuracy and Alignment in Learned Dynamics

TMLR Paper 6359 Authors

02 Nov 2025 (modified: 06 Nov 2025) · Under review for TMLR · CC BY 4.0
Abstract: Neural networks can accurately forecast complex dynamical systems, yet how they internally represent underlying dynamics remains poorly understood. We study neural forecasters through the lens of representational alignment, introducing anchor-based, geometry-agnostic relative embeddings that remove rotational and scaling ambiguities in latent spaces. Applying this framework across seven canonical dynamical systems—ranging from periodic to chaotic—we reveal reproducible family-level structure: multilayer perceptrons align with other MLPs, recurrent networks with RNNs, while transformers and echo-state networks achieve strong forecasts despite weaker alignment. Alignment generally correlates with forecasting accuracy, yet high accuracy can coexist with low alignment. Relative geometry thus provides a simple, reproducible foundation for comparing how model families internalize and represent dynamical structure.
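The abstract's anchor-based relative embeddings can be illustrated with a minimal sketch. The snippet below is an assumption about the general construction (re-expressing each latent vector by its cosine similarity to a fixed set of anchor latents), not the paper's actual implementation; all names are illustrative. It also demonstrates the claimed invariance: rotating and rescaling the latent space leaves the relative coordinates unchanged.

```python
# Hypothetical sketch of anchor-based relative embeddings; not the authors' code.
import numpy as np

def relative_embedding(latents: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """Map latents (n, d) to relative coordinates (n, k): entry (i, j) is the
    cosine similarity between latent i and anchor j. Normalization removes
    scale; the inner product removes dependence on the latent basis."""
    z = latents / np.linalg.norm(latents, axis=1, keepdims=True)
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return z @ a.T

# Invariance check: an orthogonal rotation plus uniform scaling of the latent
# space (applied to latents and anchors alike) leaves the embedding unchanged.
rng = np.random.default_rng(0)
Z = rng.normal(size=(5, 8))          # five latent states in an 8-d space
A = Z[:3]                            # anchors drawn from the same latent set
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # random orthogonal matrix
rel = relative_embedding(Z, A)
rel_transformed = relative_embedding(3.0 * Z @ Q, 3.0 * A @ Q)
print(np.allclose(rel, rel_transformed))  # True
```

Because both latents and anchors live in the same latent space, any rotation or rescaling of that space transforms them together, so the cosine-similarity coordinates are identical across such ambiguities; this is what lets embeddings from different models be compared directly.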
Submission Type: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We corrected a few typos.
Assigned Action Editor: ~Christian_Keup1
Submission Number: 6359