Abstract: Recurrent neural networks and graph convolutional networks have attracted growing attention for traffic prediction. However, existing models still have limitations: they do not fully capture the dynamic interplay between spatial and temporal features, they lose the correlation between short-term and long-term predictions, and self-attention can destroy dimensional information. To address these issues, a novel transformer, the Spatiotemporal Encode-Again Transformer (SEAT), is proposed for traffic prediction. SEAT introduces two components, a spatial-temporal cross attention and an encode-again strategy, to learn spatiotemporal features and to capture the relationships within the forecast series. We conducted experiments on several public datasets: METR-LA, PeMS-Bay, and PeMS-S. In particular, SEAT outperforms existing models by up to a 6% improvement in RMSE. The experimental results verify that SEAT learns spatiotemporal features more effectively and can support more efficient traffic control and management.
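The abstract does not detail how the spatial-temporal cross attention is wired. As a rough illustration only, the PyTorch sketch below shows one generic way a cross-attention block can fuse a temporal view and a spatial view of the same traffic tensor; the class name, shapes, and residual/normalization choices are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): temporal token embeddings
# query spatial token embeddings via standard multi-head cross attention.
import torch
import torch.nn as nn

class SpatialTemporalCrossAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        # Queries come from the temporal view; keys/values from the spatial view.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, temporal: torch.Tensor, spatial: torch.Tensor) -> torch.Tensor:
        # temporal: (batch, T, d_model) -- one token per time step
        # spatial:  (batch, N, d_model) -- one token per sensor/node
        out, _ = self.attn(query=temporal, key=spatial, value=spatial)
        return self.norm(temporal + out)  # residual connection + layer norm

# Toy usage: batch of 8, 12 time steps, 207 sensors (METR-LA has 207 sensors).
x_t = torch.randn(8, 12, 64)
x_s = torch.randn(8, 207, 64)
fused = SpatialTemporalCrossAttention(d_model=64)(x_t, x_s)
print(fused.shape)  # torch.Size([8, 12, 64])
```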