Comprehensive Transformer-Based Model Architecture for Real-World Storm Prediction

Published: 01 Jan 2023, Last Modified: 29 Sept 2023. ECML/PKDD (7) 2023.
Abstract: Storm prediction provides early alerts that allow preparation and help avoid damage to property and threats to human safety. However, traditional storm prediction models usually incur excessive computational overhead because they rely on atmospheric physics equations and complicated data assimilation. In this work, we develop a lightweight and portable Transformer-based model architecture, which takes satellite and radar images as its input, for real-world storm prediction. Deep learning-based storm prediction models must nonetheless address several challenges, including limited observational samples, intangible patterns, and multi-scale resolutions of sensor images. To tackle the aforementioned challenges and enable efficacious learning, we separate our model architecture into two stages, i.e., “representation learning” and “prediction”, respectively for extracting high-quality feature representations and for predicting weather events. Specifically, the representation learning stage employs (1) multiple masked autoencoder (MAE)-based encoders with different degrees of scalability for extracting multi-scale image patterns and (2) the Word2vec tool to produce their temporal representation. In the prediction stage, a vision transformer (ViT)-based encoder receives an input sequence, derived from packing the image patterns and their temporal representation together, for storm prediction. Extensive experiments show that our comprehensive Transformer-based model achieves an overall accuracy of 94.4% in predicting the occurrence of storm events, substantially outperforming the compared baselines.
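The abstract's key architectural idea is packing multi-scale image patterns and a temporal representation into one token sequence for a ViT-style encoder. The following is a minimal numpy sketch of that packing step only; it is not the authors' code, and all dimensions, the random-projection stand-in for the MAE encoders, and the hour-of-day lookup standing in for the Word2vec temporal representation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 64  # shared embedding width (assumed)

def mae_encode(image, patch_size, d_model=d_model):
    """Stand-in for an MAE-based encoder: split the image into patches
    and project each patch linearly (a frozen random projection here)."""
    h, w = image.shape
    patches = (image
               .reshape(h // patch_size, patch_size, w // patch_size, patch_size)
               .transpose(0, 2, 1, 3)
               .reshape(-1, patch_size * patch_size))
    proj = rng.normal(size=(patch_size * patch_size, d_model))
    return patches @ proj  # (num_patches, d_model)

# Stage 1a: encoders with different patch scales yield multi-scale patterns.
image = rng.normal(size=(64, 64))   # one satellite/radar frame (assumed size)
scales = [8, 16, 32]                # multi-scale patch sizes (assumed)
pattern_tokens = np.concatenate([mae_encode(image, p) for p in scales])

# Stage 1b: a Word2vec-style lookup maps a timestamp to a vector
# (hour-of-day vocabulary is an illustrative assumption).
time_vocab = {t: rng.normal(size=d_model) for t in range(24)}
temporal_token = time_vocab[13][None, :]  # observation taken at 13:00

# Stage 2: pack image patterns and the temporal representation into one
# sequence for a ViT-style encoder, prepended with a [CLS] token whose
# final embedding would feed the storm-occurrence classifier.
cls_token = np.zeros((1, d_model))
sequence = np.concatenate([cls_token, pattern_tokens, temporal_token])

num_patches = sum((64 // p) ** 2 for p in scales)  # 64 + 16 + 4 = 84
print(sequence.shape)  # (1 + 84 + 1, d_model) = (86, 64)
```

The ViT encoder itself (self-attention over this sequence) is omitted; the sketch only shows how heterogeneous tokens can share one sequence because they are projected to a common width.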