Long-term Forecasting with TiDE: Time-series Dense Encoder

Published: 11 Aug 2023, Last Modified: 11 Aug 2023. Accepted by TMLR.
Abstract: Recent work has shown that simple linear models can outperform several Transformer-based approaches in long-term time-series forecasting. Motivated by this, we propose a Multi-layer Perceptron (MLP) based encoder-decoder model, \underline{Ti}me-series \underline{D}ense \underline{E}ncoder (TiDE), for long-term time-series forecasting that enjoys the simplicity and speed of linear models while also being able to handle covariates and non-linear dependencies. Theoretically, we prove that the simplest linear analogue of our model can achieve a near-optimal error rate for linear dynamical systems (LDS) under some assumptions. Empirically, we show that our method can match or outperform prior approaches on popular long-term time-series forecasting benchmarks while being 5-10x faster than the best Transformer-based model.
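To make the abstract's description concrete, here is a minimal sketch of an MLP encoder-decoder of the kind described: past values plus covariates are densely encoded, decoded to the forecast horizon, and combined with a linear skip from the look-back window. This is an illustrative assumption, not the paper's exact architecture; all class names, layer sizes, and the `ResidualBlock`/`TiDESketch` structure are hypothetical (see the linked repository for the authors' implementation).

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """MLP block with a linear skip connection (sizes are illustrative)."""
    def __init__(self, in_dim, hidden_dim, out_dim, dropout=0.1):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
            nn.Dropout(dropout),
        )
        self.skip = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return self.mlp(x) + self.skip(x)

class TiDESketch(nn.Module):
    """Hypothetical MLP encoder-decoder in the spirit of the abstract:
    densely encode (past values + past/future covariates), decode to the
    horizon, and add a global linear residual from look-back to forecast."""
    def __init__(self, lookback, horizon, cov_dim, hidden_dim=256, latent_dim=128):
        super().__init__()
        in_dim = lookback + cov_dim * (lookback + horizon)
        self.encoder = ResidualBlock(in_dim, hidden_dim, latent_dim)
        self.decoder = ResidualBlock(latent_dim, hidden_dim, horizon)
        # Linear-model skip: keeps the simplicity/speed of a pure linear map.
        self.linear_residual = nn.Linear(lookback, horizon)

    def forward(self, past, covariates):
        # past: (batch, lookback); covariates: (batch, lookback + horizon, cov_dim)
        x = torch.cat([past, covariates.flatten(1)], dim=-1)
        return self.decoder(self.encoder(x)) + self.linear_residual(past)

# Usage: forecast 96 steps from a 336-step look-back with 4 covariates per step.
model = TiDESketch(lookback=336, horizon=96, cov_dim=4)
past = torch.randn(32, 336)
covs = torch.randn(32, 336 + 96, 4)
print(model(past, covs).shape)  # torch.Size([32, 96])
```

Note the design choice this sketch highlights: with only dense layers, one forward pass costs a handful of matrix multiplies, which is the source of the speed advantage the abstract claims over self-attention models.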
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: 1. Added two more ablation studies. 2. Added experiments on the M5 dataset. 3. Added a comparison with S4 models. 4. Several improvements to writing and exposition. 5. Added more discussion of sub-quadratic approximations of self-attention in prior work. For the camera-ready version: 1. We moved the theoretical guarantees section to the appendix. 2. We clarified the differences and similarities between the dense encoder and the dense decoder. 3. Changed the names of blocks in the architecture figure.
Code: https://github.com/google-research/google-research/tree/master/tide
Supplementary Material: zip
Assigned Action Editor: ~Alessandro_Sperduti1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1200