A Comparative Empirical Study of Relative Embedding Alignment in Neural Dynamical System Forecasters

Published: 23 Sept 2025, Last Modified: 29 Oct 2025
Venue: NeurReps 2025 Poster
License: CC BY 4.0
Keywords: dynamical systems, relative representations, latent representations, forecasting, autoencoders
TL;DR: Using anchor-based relative embeddings, we compare the latent spaces of neural forecasters across systems and seeds, finding family-structured alignment that generally tracks accuracy; strong performance, especially for Transformers, can still occur with weaker alignment.
Abstract: We study representation alignment in neural forecasters using anchor-based, geometry-agnostic \emph{relative embeddings} that remove rotational and scaling ambiguities, enabling robust cross-seed and cross-architecture comparisons. Across diverse periodic, quasi-periodic, and chaotic systems and a range of forecasters (MLPs, RNNs, Transformers, Neural ODE/Koopman, ESNs), we find consistent family-level patterns: MLPs align with MLPs, RNNs align strongly, Transformers align least with others, and ESNs show reduced alignment on several chaotic systems. Alignment generally tracks forecasting accuracy (higher similarity predicts lower multi-step MSE), yet strong performance can occur with weaker alignment (notably for Transformers). Relative embeddings thus provide a practical, reproducible basis for comparing learned dynamics.
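The paper's implementation is not shown on this page; the following is a minimal sketch of the anchor-based relative-embedding construction the abstract describes, assuming cosine similarity to a shared anchor set (the standard relative-representations recipe). Function names, the alignment score, and the toy usage are illustrative assumptions, not the authors' code.

    import numpy as np

    def relative_embedding(latents: np.ndarray, anchors: np.ndarray) -> np.ndarray:
        """Re-express absolute latents (n, d) as cosine similarities to a
        shared set of k anchor latents (k, d). Cosine similarity is invariant
        to rotations and positive rescalings of the latent space, which is
        what enables cross-seed and cross-architecture comparison."""
        z = latents / np.linalg.norm(latents, axis=1, keepdims=True)
        a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
        return z @ a.T  # (n, k): row i = similarities of sample i to each anchor

    def alignment_score(rel_a: np.ndarray, rel_b: np.ndarray) -> float:
        """One plausible alignment measure (an assumption): mean row-wise
        cosine similarity between two models' relative embeddings of the
        same inputs. Higher means more closely aligned latent geometry."""
        ra = rel_a / np.linalg.norm(rel_a, axis=1, keepdims=True)
        rb = rel_b / np.linalg.norm(rel_b, axis=1, keepdims=True)
        return float(np.mean(np.sum(ra * rb, axis=1)))

    # Hypothetical usage: two forecasters with different latent widths encode
    # the same trajectory states and the same anchor states; random arrays
    # stand in for real encoder outputs here.
    rng = np.random.default_rng(0)
    z_mlp, z_rnn = rng.normal(size=(100, 32)), rng.normal(size=(100, 16))
    a_mlp, a_rnn = rng.normal(size=(10, 32)), rng.normal(size=(10, 16))
    print(alignment_score(relative_embedding(z_mlp, a_mlp),
                          relative_embedding(z_rnn, a_rnn)))

Because both models are projected onto similarities to the same k anchor inputs, their relative embeddings live in a common k-dimensional coordinate system even when the absolute latent dimensions differ (32 vs. 16 above), which is what makes the cross-architecture comparison described in the abstract well-defined.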
Submission Number: 75