Keywords: Adaptive state-space modeling, High-order time series, Markov order estimation, Stability-aware inference, Dynamical system recovery
TL;DR: We propose an adaptive framework that learns explicit high-order state-space models from time series, using stability-aware filtering for robust and interpretable dynamics.
Abstract: Explicit, equation-discovery models promise transparent mechanisms and strong extrapolation for time-series dynamics. Yet most existing methods impose first-order structure, even when the true system depends on multiple lags. This mismatch is typically absorbed by inflating the latent state via ad hoc augmentation, which erodes identifiability, complicates learning, and weakens interpretability. Compounding the issue, defaulting to Kalman-style updates in nonlinear or weakly stable regimes is brittle: inference degrades away from fixed points, biasing parameter estimates and reducing predictive reliability.
We introduce a framework for \emph{adaptive high-order dynamics modeling}. Given an $m$-dimensional series, we \emph{initialize the latent dimension to $m$} and estimate the Markov order $p$—the minimal number of past states needed to predict the next—via a conditional mutual information test. Rolling statistics assess proximity to attractors and drive \emph{stability-aware} filter selection. Starting from $(p,m)$, an inference–learning loop evaluates candidate structures and guides a unidirectional search that converges to $(\hat p,\hat m)$ together with the associated system parameters. Across benchmark datasets, the resulting models yield more flexible latent dynamics and consistently improve predictive accuracy over state-of-the-art baselines.
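The order-estimation step described above can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the paper's implementation: it uses a Gaussian (partial-correlation) estimator of conditional mutual information and a fixed significance threshold, both of which are assumptions made here for concreteness. It returns the smallest lag $p$ beyond which an additional lag carries negligible information about the next value.

```python
import numpy as np

def gaussian_cmi(x, y, z):
    """Estimate I(x; y | z) under a Gaussian assumption via partial correlation.

    Regresses the conditioning block z out of x and y, correlates the
    residuals, and maps the partial correlation to nats.
    """
    Z = np.column_stack([z, np.ones(len(x))])  # include an intercept
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    rho = np.corrcoef(rx, ry)[0, 1]
    return -0.5 * np.log(max(1.0 - rho**2, 1e-12))

def estimate_markov_order(series, p_max=5, threshold=0.01):
    """Smallest p such that lag p+1 adds negligible CMI given lags 1..p.

    `p_max` and `threshold` are illustrative hyperparameters, not values
    from the paper.
    """
    T = len(series)
    for p in range(1, p_max + 1):
        t0 = p + 1
        x_next = series[t0:]                # x_t
        extra = series[:T - t0]             # candidate lag x_{t-(p+1)}
        cond = np.column_stack(             # conditioning lags x_{t-1..t-p}
            [series[t0 - k: T - k] for k in range(1, p + 1)])
        if gaussian_cmi(x_next, extra, cond) < threshold:
            return p
    return p_max
```

On a synthetic AR(2) series, for example, the test retains lags 1 and 2 (their conditional contributions are large) and stops at $\hat p = 2$, since lag 3 adds no information once the first two lags are conditioned on.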
Supplementary Material: pdf
Primary Area: learning on time series and dynamical systems
Submission Number: 20077