Keywords: neural decoding, nonlinear dynamics, manifold learning, causal inference, behavior prediction, neuroimaging
Abstract: Decoding and forecasting human behavior from neuroimaging data is a fundamental challenge spanning neuroscience, artificial intelligence, and machine learning. Naturalistic tasks such as real-world navigation generate complex, nonlinear dynamics that are difficult to model: linear methods cannot capture these dynamics, while deep learning architectures often overfit in the limited and noisy data regimes typical of fMRI. We introduce Manifold Dimensional Expansion (MDE), a simple yet powerful prediction algorithm grounded in dynamical systems theory. Leveraging the generalized Takens theorem and Simplex projection, MDE reconstructs latent state spaces directly from voxelwise fMRI signals and integrates feature selection with cross-validation to identify causally relevant neural drivers of behavior. Applied to a naturalistic driving task, MDE predicts Steering, Acceleration, and Braking from fMRI time series with accuracy comparable to or exceeding regression and Transformer baselines. Crucially, MDE is the first method to combine strong predictive performance with guaranteed mechanistic interpretability, as it does not rely on latent variables. This property enables causal insights into brain–behavior dynamics. Such interpretability is essential in neuroscience, where the goal is not only to predict but also to discover and understand the mechanisms linking neural activity to behavior; such insights are critical for advancing scientific understanding and guiding interventions. More broadly, our results demonstrate that manifold-based dynamical embeddings offer a principled path toward accurate, causally grounded forecasting of complex nonlinear systems in domains where interpretability is as important as performance.
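For readers unfamiliar with the machinery the abstract invokes, the sketch below illustrates Takens-style delay embedding and Simplex projection in generic Python. It is a minimal sketch under assumptions, not the MDE algorithm itself: the multivariate expansion, voxelwise feature selection, and cross-validation described above are omitted, and the helper names `delay_embed` and `simplex_forecast`, the toy signal, and the parameter choices are all hypothetical.

```python
import numpy as np

def delay_embed(x, E, tau):
    """Takens-style delay embedding: row t is (x[t], x[t+tau], ..., x[t+(E-1)*tau])."""
    n = len(x) - (E - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(E)])

def simplex_forecast(lib_signal, lib_target, query_signal, E=3, tau=1, tp=1):
    """Simplex projection: forecast the target tp steps ahead of each query
    state as a distance-weighted average over the E+1 nearest library states."""
    lib_vecs = delay_embed(lib_signal, E, tau)
    # time index of the most recent coordinate in each library delay vector
    last_idx = np.arange(lib_vecs.shape[0]) + (E - 1) * tau
    valid = last_idx + tp < len(lib_target)
    lib_vecs, targets = lib_vecs[valid], lib_target[last_idx[valid] + tp]

    query_vecs = delay_embed(query_signal, E, tau)
    preds = np.empty(query_vecs.shape[0])
    for i, q in enumerate(query_vecs):
        d = np.linalg.norm(lib_vecs - q, axis=1)
        nn = np.argsort(d)[:E + 1]                 # E+1 nearest neighbors
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))  # exponential distance weights
        preds[i] = np.dot(w, targets[nn]) / w.sum()
    return preds

# Toy example: self-prediction of a noisy sine, one step ahead.
rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t) + 0.05 * rng.standard_normal(t.size)
print(simplex_forecast(x[:1500], x[:1500], x[1500:1600], E=3, tau=5, tp=1)[:5])
```

In the setting described by the abstract, one would substitute a selected voxel time series for the toy signal and a behavioral trace (e.g., steering) for the prediction target; the exponential weighting over the E+1 nearest neighbors is the standard Simplex-projection choice.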
Supplementary Material: zip
Primary Area: applications to neuroscience & cognitive science
Submission Number: 20925