Keywords: Deep learning, RNN, gradient, linear recurrent neural network, state space model, S4
TL;DR: The study theoretically highlights the importance of the initial parameters for gradient-based learning of state space models
Abstract: State space models (SSMs) have gained attention by showing the potential to outperform Transformers.
However, the mechanisms underlying their high performance remain insufficiently understood,
owing to the lack of a theoretical explanation of SSMs' learning dynamics.
In this study, we provide such an explanation and propose an improved training strategy.
The memory capacity of SSMs can be evaluated by examining how input time series are stored in their current state.
Such an examination reveals a tradeoff between memory accuracy and length,
as well as the theoretical equivalence between the structured state space sequence model (S4) and a simplified S4 with diagonal recurrent weights.
This theoretical foundation allows us to elucidate the learning dynamics, proving the importance of initial parameters.
Our analytical results suggest that successful learning requires the initial memory structure to be as long as possible,
even if memory accuracy deteriorates
or the gradient loses the teacher information.
Experiments on tasks requiring long memory confirmed that extending memory is difficult, emphasizing the importance of initialization.
Furthermore, we found that fixing recurrent weights can be more advantageous than adapting them
because it achieves comparable or even higher performance with faster convergence.
Our results provide a new theoretical foundation for SSMs and potentially offer a novel optimization strategy.
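The abstract's central object, a diagonal linear recurrence of the kind the simplified S4 reduces to, can be sketched in a few lines. The following is a minimal illustration (not the paper's implementation; all parameter values are hypothetical) of the memory tradeoff: channels whose recurrent weight has magnitude close to 1 retain old inputs far longer than channels that decay quickly.

```python
import numpy as np

# Minimal sketch of a diagonal linear SSM recurrence:
#     x_t = a * x_{t-1} + b * u_t
# with elementwise (diagonal) recurrent weights a. Channels with |a|
# near 1 have long memory; smaller |a| forgets the input faster.

def run_diagonal_ssm(a, b, inputs):
    """Run a diagonal linear recurrence over a 1-D input sequence."""
    x = np.zeros_like(a, dtype=float)
    for u in inputs:
        x = a * x + b * u
    return x

# Feed an impulse at t=0 followed by zeros: the final state shows how
# much of that old input each channel still remembers after T steps.
T = 50
inputs = np.zeros(T)
inputs[0] = 1.0
a = np.array([0.5, 0.9, 0.99])  # hypothetical decay rates per channel
b = np.ones(3)
state = run_diagonal_ssm(a, b, inputs)
# The impulse's contribution after T steps is a**(T-1), so the channel
# with a = 0.99 retains it orders of magnitude better than a = 0.5.
```

Because the recurrence is linear, the retained contribution is exactly `a ** (T - 1)`, which makes the decay-rate tradeoff described in the abstract directly visible.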
Primary Area: learning theory
Submission Number: 7844