Stable Spatiotemporal Memory in Echo-State Networks via Gliotransmitter Feedback

06 Sept 2025 (modified: 01 Feb 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Echo State Networks, Reservoir Computing, Neuro–glial coupling, Astrocyte lattice, Reaction–diffusion dynamics, Gliotransmitter bias, Spatiotemporal memory, Joint echo-state property, Provable stability
TL;DR: A fast ESN is bidirectionally coupled to a diffusive astrocyte lattice whose bounded reaction–diffusion waves drive an additive gliotransmitter bias, yielding a provable joint ESP and an interpretable, spatially structured slow memory.
Abstract: Reservoir computing offers simple, stable training for sequence modeling, yet vanilla reservoirs struggle to sustain \emph{long, structured memory} without operating near fragile regimes. We seek a reservoir that retains echo–state guarantees and single–shot readout training while endowing the state with \emph{slow, spatially coherent} memory. We introduce \textit{TRIGR}, a neuro-inspired reservoir that augments a fast ESN core with a bidirectionally coupled astrocytic reaction–diffusion lattice. Neuronal activity generates rectified release proxies pooled onto astrocytes; astrocytic Ca$^{2+}$ evolves via a \emph{diffusive, saturating} update on a grid Laplacian; a bounded gliotransmitter fraction feeds back to neurons as an \emph{additive bias}. A one–step delay in each cross–coupling, together with globally Lipschitz kinetics, yields explicit, checkable \emph{row–wise operator–norm budgets} that make the joint map a uniform contraction in a block–$\ell_\infty$ norm, providing a clean \emph{echo–state certificate} for the coupled glia–neuron dynamics. We enforce these budgets via a norm–aware initialization that rescales diffusion and pooling footprints (astrocyte row) and jointly scales recurrent and feedback gains (neuron row). The astrocyte update is computed by an $O(M)$ 5–point stencil (or an equivalent resolvent variant), so the per–step cost remains dominated by the neuronal multiply. Beyond the ESP certificate, we establish (i) quantitative input–state and input–output Lipschitz/ISS bounds, and (ii) a small–gain condition certifying stability of the \emph{autoregressive} closed loop used at inference. Empirically, on canonical long–horizon benchmarks (chaotic forecasting and real–world series), TRIGR attains longer valid prediction horizons and stable rollouts with modest overhead; results are reported with time–aware splits and multiple seeds/initializations to assess robustness.
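The NumPy sketch below is one possible reading of the coupled update described in the abstract, not the authors' implementation: the dimensions, gains, specific nonlinearities (tanh, ReLU, clipping), and periodic boundary handling are all illustrative assumptions. It shows the one-step-delayed cross-coupling, the $O(M)$ 5-point stencil Laplacian for the astrocytic Ca$^{2+}$ update, and a norm-aware rescaling that enforces row-wise operator-norm budgets of the kind the block-$\ell_\infty$ contraction certificate requires.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- sizes (hypothetical) ---
N = 300          # fast ESN neurons
G = 10           # astrocyte lattice side; M = G*G cells
M = G * G

# --- raw weights (placeholder initializations) ---
W    = rng.standard_normal((N, N)) / np.sqrt(N)   # recurrent weights
W_in = rng.standard_normal((N, 1)) * 0.5          # input weights
F    = np.abs(rng.standard_normal((N, M))) / M    # glia -> neuron feedback footprint
P    = np.abs(rng.standard_normal((M, N))) / N    # neuron -> glia pooling footprint

# --- astrocyte kinetics (placeholder constants) ---
D, decay = 0.1, 0.2

# --- norm-aware rescaling: row-wise operator-norm budgets in block-l_inf ---
rho_n, rho_a = 0.9, 0.95
# Neuron row: tanh is 1-Lipschitz, so ||W||_inf + ||F||_inf <= rho_n bounds its gain.
s = rho_n / (np.linalg.norm(W, np.inf) + np.linalg.norm(F, np.inf))
W, F = s * W, s * F
# Astrocyte row: the linear part (1 - decay)*I + D*Lap has inf-norm
# |1 - decay - 4D| + 4D on the 5-point stencil; the remaining budget goes to P.
lin_gain = abs(1.0 - decay - 4.0 * D) + 4.0 * D
P *= (rho_a - lin_gain) / np.linalg.norm(P, np.inf)

def laplacian_5pt(c):
    """O(M) 5-point stencil Laplacian on the G x G grid (periodic edges for brevity)."""
    C = c.reshape(G, G)
    L = (np.roll(C, 1, 0) + np.roll(C, -1, 0)
         + np.roll(C, 1, 1) + np.roll(C, -1, 1) - 4.0 * C)
    return L.reshape(M)

def step(x, c, u):
    """One joint step; each block reads the other's *previous* state (one-step delay)."""
    r = np.maximum(x, 0.0)                        # rectified release proxy from neurons
    g = np.tanh(c)                                # bounded gliotransmitter fraction
    x_new = np.tanh(W @ x + W_in @ u + F @ g)     # fast ESN core + additive glial bias
    c_new = (1.0 - decay) * c + D * laplacian_5pt(c) + P @ r
    return x_new, np.clip(c_new, 0.0, 1.0)        # saturating Ca2+ kinetics

# Drive with a toy scalar sequence; the concatenated [x; c] would feed a one-shot readout.
x, c = np.zeros(N), np.zeros(M)
for t in range(200):
    x, c = step(x, c, np.array([np.sin(0.1 * t)]))
state = np.concatenate([x, c])
```

Because each block reads only the other's previous state, the cross-coupling contributes off-diagonal terms to separate rows, so the two row budgets above can be checked (and enforced by rescaling) independently, which is the mechanism behind the contraction certificate sketched here.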
Supplementary Material: zip
Primary Area: learning on time series and dynamical systems
Submission Number: 2595