One Process Spatiotemporal Learning of Transformers via Vcls Token for Multivariate Time Series Forecasting

Published: 01 Jan 2024 · Last Modified: 13 May 2025 · ICANN (6) 2024 · CC BY-SA 4.0
Abstract: Previous Transformer-based models for multivariate time series forecasting focus mainly on learning temporal dependencies and neglect the associations between variables. Recent methods that apply Attention over spatial (variate) tokens before or after temporal learning yield effective improvements. However, such whole-set association homogenizes variables of differing complexity and cannot learn accurate spatiotemporal dependencies for each of them; it also incurs substantial additional complexity, especially when the number of variables is large. We propose a Variable class (Vcls) token to improve temporal Transformers. The proposed TLCC-SC module produces accurate and inclusive variable categories, from which the Vcls token is generated. The temporal tokens in the Transformer thereby acquire strongly correlated cross-spatiotemporal dependencies from different variables within the same class by attending to the Vcls token. Our method yields general improvements for temporal Transformers and achieves performance consistent with state-of-the-art methods on challenging real-world datasets. The code is available at: https://github.com/Joeland4/Vcls.
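The abstract only sketches the mechanism, and the internals of the TLCC-SC module and the exact Vcls construction are not specified there; the authors' implementation is in the linked repository. As a rough, hypothetical illustration of the idea, the toy NumPy sketch below hardcodes a variable-to-class grouping (standing in for the TLCC-SC output), pools each class's embeddings into a Vcls token (the pooling choice is an assumption), and lets each variable's temporal tokens attend to its class's Vcls token:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attend(tokens, d):
    # Single-head self-attention with random projections, for illustration only.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    return softmax(q @ k.T / np.sqrt(d)) @ v

# Toy setup: 4 variables; the class grouping below is assumed, standing in
# for whatever categories a TLCC-SC-style step would produce.
clusters = {0: [0, 2], 1: [1, 3]}
num_vars, num_patches, d = 4, 6, 8
temporal_tokens = rng.standard_normal((num_vars, num_patches, d))

# One Vcls token per class: here, a mean-pooled embedding of the class's
# variables (an assumption; the paper's generation step may differ).
vcls = {c: temporal_tokens[idx].mean(axis=(0, 1)) for c, idx in clusters.items()}

# Each variable's temporal tokens attend over [Vcls; temporal tokens], so
# they can pick up class-level information from correlated variables.
for c, idx in clusters.items():
    for i in idx:
        seq = np.vstack([vcls[c][None, :], temporal_tokens[i]])
        out = attend(seq, d)
        temporal_tokens[i] = out[1:]   # drop the Vcls slot after attention
```

Because the Vcls token summarizes only variables in the same class, each temporal token mixes in cross-variable information without attending over the full variable set, which is the complexity saving the abstract alludes to.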