Keywords: Deep learning, sequence model, state space model, S4, HiPPO, diagonal state space
TL;DR: We explain and compare variants of diagonal state spaces and introduce simpler versions
Abstract: State space models (SSMs) have recently been shown to be very effective as a deep learning layer, offering a promising alternative to sequence models such as RNNs, CNNs, and Transformers.
The first model to demonstrate this potential was S4, which is particularly effective on tasks involving long-range dependencies because it uses a prescribed state matrix called the HiPPO matrix.
While the HiPPO matrix provides an interpretable mathematical mechanism for modeling long-range dependencies,
it also requires a custom representation and algorithm that make the model difficult to understand and implement.
On the other hand, a recent variant of S4 called DSS showed that restricting the state matrix to be fully diagonal can still preserve the performance of the original model when using a specific initialization based on approximating S4's matrix.
This work seeks to systematically understand how to parameterize and initialize diagonal state space models.
While it follows from classical results that almost all SSMs have an equivalent diagonal form, we show that the initialization is critical for performance.
First, we explain mathematically why DSS works: the diagonal approximation to S4 surprisingly recovers the same dynamics in the limit of infinite state dimension.
We then systematically describe various design choices in parameterizing and computing diagonal SSMs, and perform a controlled empirical study ablating the effects of these choices.
Our final model S4D is a simple diagonal version of S4 whose kernel computation requires just 3 lines of code and performs comparably to S4 in almost all settings, with state-of-the-art results in image, audio, and medical time-series domains, and 85% average on the Long Range Arena benchmark.
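To make the "3 lines of code" claim concrete, here is a minimal NumPy sketch of a diagonal SSM convolution kernel, assuming zero-order-hold (ZOH) discretization and an implicit conjugate-pair parameterization of the complex state; the function name `s4d_kernel` and the surrounding setup are illustrative, not the authors' exact code.

```python
import numpy as np

def s4d_kernel(A, B, C, dt, L):
    """Convolution kernel of a diagonal SSM x' = Ax + Bu, y = Cx under ZOH.

    A, B, C are complex vectors of shape (N,); dt is the step size.
    Returns the real, length-L kernel K with K[l] = sum_n C_n * Bbar_n * Abar_n**l.
    """
    Abar = np.exp(dt * A)                # discretized diagonal state matrix
    Bbar = (Abar - 1.0) / A * B          # ZOH discretization of B (elementwise since A is diagonal)
    V = Abar[:, None] ** np.arange(L)    # Vandermonde matrix V[n, l] = Abar_n**l
    return 2 * ((C * Bbar) @ V).real     # factor 2 assumes implicit conjugate-pair states

# Usage example (illustrative): an S4D-Lin-style initialization A_n = -1/2 + i*pi*n
N, L, dt = 32, 1024, 0.01
A = -0.5 + 1j * np.pi * np.arange(N)     # N complex states (conjugate pairs implied)
B = np.ones(N, dtype=np.complex128)
C = np.random.randn(N) + 1j * np.random.randn(N)
K = s4d_kernel(A, B, C, dt, L)           # shape (L,); convolve with the input to apply the SSM
```

Note that the kernel computation itself is the three commented lines inside the function; because A is diagonal, the discretization and the powers of Abar reduce to elementwise operations and a single Vandermonde matrix-vector product.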