Abstract: The primary objective of precipitation nowcasting is to predict precipitation patterns several hours in advance. Recent studies have emphasized the potential of deep learning methods for this task. To harness the correlations among various meteorological elements, existing frameworks project multiple meteorological elements into a latent space and then utilize convolutional-recurrent networks to predict future precipitation. Although effective, their escalating model complexity may impede practical applications. This letter develops the Preformer, a streamlined Transformer framework for precipitation nowcasting that efficiently captures global spatiotemporal dependencies among multiple meteorological elements. The Preformer adopts an encoder-translator-decoder architecture, where the encoder integrates spatial features of multiple elements, the translator models spatiotemporal dynamics, and the decoder combines spatiotemporal information to forecast future precipitation. Without introducing complex structures or strategies, the Preformer achieves state-of-the-art performance with the fewest parameters.
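The encoder-translator-decoder dataflow described above can be sketched at the shape level as follows. This is a minimal illustration, not the authors' implementation: the layer contents (random linear projections, a single global self-attention step) and all dimensions (`T_in`, `E`, `H`, `W`, `D`) are assumptions chosen only to show how multi-element frames are fused into a latent space, mixed across space and time, and decoded into precipitation forecasts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions (illustrative only)
T_in = 4                  # input time steps
E = 3                     # meteorological elements (e.g. precipitation, wind, humidity)
H, W, D = 8, 8, 16        # spatial grid and latent width

def encoder(x):
    """Fuse the E element channels at each grid point into a D-dim latent."""
    W_enc = rng.standard_normal((E, D)) / np.sqrt(E)
    return x @ W_enc                        # (T_in, H, W, E) -> (T_in, H, W, D)

def translator(z):
    """Model global spatiotemporal dynamics with one self-attention step."""
    tokens = z.reshape(T_in * H * W, D)     # flatten space-time into a token sequence
    scores = tokens @ tokens.T / np.sqrt(D) # every token attends to every other
    scores = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn = scores / scores.sum(axis=1, keepdims=True)
    return (attn @ tokens).reshape(T_in, H, W, D)

def decoder(z):
    """Project the mixed latents back to precipitation frames."""
    W_dec = rng.standard_normal((D, 1)) / np.sqrt(D)
    return z @ W_dec                        # (T_in, H, W, 1) forecast frames

x = rng.standard_normal((T_in, H, W, E))    # multi-element input sequence
y = decoder(translator(encoder(x)))
print(y.shape)                              # (4, 8, 8, 1)
```

The point of the sketch is the division of labor the abstract names: element fusion happens per grid point in the encoder, all cross-token interaction is confined to the translator, and the decoder is a lightweight read-out, which is what keeps the parameter count small.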