First De-Trend then Attend: Rethinking Attention for Time-Series Forecasting

Published: 21 Oct 2022, Last Modified: 05 May 2023
Attention Workshop, NeurIPS 2022 Poster
Keywords: frequency attention, attention relationship, seasonal-trend decomposition, time series forecasting
TL;DR: We theoretically and empirically analyze the relationships between variants of attention models in time-series forecasting, and propose a decomposition-based hybrid method that outperforms existing attention models.
Abstract: Transformer-based models have gained considerable popularity and demonstrated promising results in long-term time-series forecasting in recent years. In addition to learning attention in the time domain, recent works also explore learning attention in frequency domains (e.g., the Fourier and wavelet domains), given that seasonal patterns can be better captured there. In this work, we seek to understand the relationships between attention models across time and frequency domains. Theoretically, we show that attention models in different domains are equivalent under linear conditions (i.e., a linear kernel applied to the attention scores). Empirically, we analyze how attention models in different domains behave differently through various synthetic experiments with seasonality, trend, and noise, with emphasis on the role of the softmax operation. Both the theoretical and empirical analyses motivate us to propose a new method, TDformer (Trend Decomposition Transformer), which first applies seasonal-trend decomposition and then additively combines an MLP that predicts the trend component with Fourier attention that predicts the seasonal component to obtain the final prediction. Extensive experiments on benchmark time-series forecasting datasets demonstrate that TDformer achieves state-of-the-art performance compared with existing attention-based models.
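The pipeline the abstract describes (decompose, predict trend with an MLP and seasonality with Fourier attention, add the two forecasts) can be sketched in a few lines of PyTorch. The sketch below is purely illustrative: the moving-average kernel, layer sizes, module names, and the exact form of the Fourier attention (softmax over the magnitudes of complex frequency-domain scores) are assumptions, not the authors' reference implementation.

```python
# Illustrative sketch only; hyperparameters, module names, and the exact
# Fourier-attention form are assumptions, not the paper's reference code.
import torch
import torch.nn as nn


class MovingAvgDecomp(nn.Module):
    """Seasonal-trend decomposition via a moving average over time."""
    def __init__(self, kernel_size=25):  # assumed kernel size
        super().__init__()
        self.avg = nn.AvgPool1d(kernel_size, stride=1,
                                padding=kernel_size // 2,
                                count_include_pad=False)

    def forward(self, x):                       # x: (batch, length, channels)
        trend = self.avg(x.transpose(1, 2)).transpose(1, 2)
        return x - trend, trend                 # (seasonal, trend)


class FourierSelfAttention(nn.Module):
    """Self-attention on Fourier coefficients; softmax is applied to the
    magnitudes of the complex scores (one of several possible choices)."""
    def forward(self, x):                       # x: (batch, length, channels)
        f = torch.fft.rfft(x, dim=1)            # (batch, freq, channels), complex
        scores = torch.einsum("bfc,bgc->bfg", f, f.conj())
        weights = torch.softmax(scores.abs(), dim=-1).to(f.dtype)
        out = torch.einsum("bfg,bgc->bfc", weights, f)
        return torch.fft.irfft(out, n=x.size(1), dim=1)


class TDformerSketch(nn.Module):
    """Decompose, predict the trend with an MLP and the seasonal component
    with Fourier attention, then combine the two forecasts additively."""
    def __init__(self, in_len, out_len, kernel_size=25, hidden=64):
        super().__init__()
        self.decomp = MovingAvgDecomp(kernel_size)
        self.trend_mlp = nn.Sequential(         # maps time axis in_len -> out_len
            nn.Linear(in_len, hidden), nn.ReLU(), nn.Linear(hidden, out_len))
        self.season_attn = FourierSelfAttention()
        self.season_proj = nn.Linear(in_len, out_len)

    def forward(self, x):                       # x: (batch, in_len, channels)
        seasonal, trend = self.decomp(x)
        trend_out = self.trend_mlp(trend.transpose(1, 2)).transpose(1, 2)
        season_out = self.season_proj(
            self.season_attn(seasonal).transpose(1, 2)).transpose(1, 2)
        return trend_out + season_out           # (batch, out_len, channels)


# Usage: forecast 24 future steps from a 96-step window of 7 channels.
model = TDformerSketch(in_len=96, out_len=24)
pred = model(torch.randn(8, 96, 7))            # -> (8, 24, 7)
```

The additive combination mirrors the decomposition itself: since the input is split as trend plus seasonal, each branch can specialize (a smooth MLP extrapolation for the trend, frequency-domain attention for the periodic part) and the forecasts recombine by summation.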