Keywords: reservoir computing; echo state networks; recurrent neural networks
TL;DR: Introducing a novel framework for constructing efficient, randomized RNNs based on diagonal linear recurrence in the complex space.
Abstract: Reservoir Computing (RC) has established itself as an efficient paradigm for temporal processing, yet its scalability remains severely constrained by the necessity of processing temporal data sequentially. In this work, we revisit RC through the lens of structured operators and state space modeling, introducing the Parallel Echo State Network (ParalESN), a framework for constructing efficient reservoirs with complex-valued diagonal linear recurrence whose state computation can be parallelized during training. We provide a theoretical analysis demonstrating that ParalESN preserves the Echo State Property and the universality guarantees of classical Echo State Networks, while admitting an equivalent representation of arbitrary linear reservoirs in complex diagonal form. Empirically, ParalESN attains predictive accuracy comparable to traditional RC on memory and forecasting benchmarks, while delivering substantial gains in training efficiency. On 1-D pixel-level classification tasks, the model achieves accuracy competitive with fully trainable networks while reducing computational cost and energy consumption. Overall, ParalESN offers a promising, scalable, and principled pathway for integrating RC within the deep learning landscape.
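As a rough illustration of the idea sketched in the abstract (a reservoir with complex-valued diagonal linear recurrence whose states can be computed in parallel over time), here is a minimal NumPy sketch. The recurrence form, variable names, and sizes are our assumptions for illustration, not the paper's actual ParalESN implementation.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's implementation) of a reservoir
# with diagonal linear recurrence in the complex space:
#   h_t = lam * h_{t-1} + B @ x_t,   with lam a complex diagonal (vector).
# Because the recurrence is diagonal and linear, all states can be computed
# in parallel over time instead of with a sequential loop.

rng = np.random.default_rng(0)
T, d_in, d_res = 100, 3, 64                      # hypothetical sizes

# Complex eigenvalues inside the unit circle (a standard sufficient condition
# for a contractive, echo-state-like linear reservoir).
radii = rng.uniform(0.8, 0.99, d_res)
phases = rng.uniform(0.0, 2 * np.pi, d_res)
lam = radii * np.exp(1j * phases)                # shape (d_res,)

B = rng.standard_normal((d_res, d_in)) + 1j * rng.standard_normal((d_res, d_in))

x = rng.standard_normal((T, d_in))               # input sequence
u = x @ B.T                                      # driven inputs, shape (T, d_res)

# Sequential reference: classic step-by-step reservoir update.
h_seq = np.zeros((T, d_res), dtype=complex)
h = np.zeros(d_res, dtype=complex)
for t in range(T):
    h = lam * h + u[t]
    h_seq[t] = h

# Parallel form: h_t = sum_{s<=t} lam^(t-s) * u_s, computed here with a
# cumulative sum after rescaling by powers of lam (a simple stand-in for a
# numerically robust parallel/associative scan).
powers = lam[None, :] ** np.arange(T)[:, None]   # lam^t, shape (T, d_res)
h_par = powers * np.cumsum(u / powers, axis=0)

assert np.allclose(h_seq, h_par)
```

In practice one would replace the rescaling trick with a proper parallel scan; the sketch only shows why the diagonal structure removes the sequential bottleneck during training.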
Primary Area: learning on time series and dynamical systems
Submission Number: 7872