SWAformer: A novel shifted window attention Transformer model for accurate power distribution prediction
Abstract: In recent years, deep learning models have been increasingly applied to power distribution prediction, building on the success of time series forecasting methods. These models extract features from the data and capture overall trends, yielding more accurate predictions than traditional approaches. Global information refers to the overall trend or pattern across the entire series, while local information pertains to specific variations within smaller regions of the data. Focusing solely on global information while neglecting local variations, however, can reduce prediction accuracy and inflate model complexity. To address these challenges, we propose SWAformer, a novel Shifted Window Attention Transformer that enhances local feature learning by iteratively computing self-attention within windows of varying sizes. Patch relative position encoding is additionally incorporated to better preserve spatiotemporal information. The model operates in a bottom-up manner: cascaded stages progressively extract features from local to global levels, efficiently integrating both kinds of information and improving performance. Experiments on widely used public datasets demonstrate that the model achieves superior prediction results on power distribution tasks. By informing grid operators' decisions, our approach enhances the accuracy and efficiency of power distribution and supports the advancement and modernization of power systems.
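To make the core mechanism concrete, the sketch below illustrates shifted-window self-attention on a 1-D load sequence. This is an illustrative assumption, not the paper's released code: the class name `WindowAttention1D`, the `window_size` and `shift` parameters, and the use of PyTorch's `nn.MultiheadAttention` are all hypothetical choices showing how attention restricted to local windows, with windows shifted between successive layers, lets local features propagate into wider context.

```python
# Minimal sketch of shifted-window self-attention for a 1-D time series.
# All names and hyperparameters here are illustrative assumptions; the paper's
# actual implementation may differ (e.g., in masking, patching, or encoding).
import torch
import torch.nn as nn

class WindowAttention1D(nn.Module):
    """Multi-head self-attention restricted to non-overlapping 1-D windows."""
    def __init__(self, dim, window_size, num_heads):
        super().__init__()
        self.window_size = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x, shift=0):
        # x: (batch, seq_len, dim); seq_len must be divisible by window_size.
        b, n, d = x.shape
        if shift:
            # Roll the sequence so this layer's windows straddle the previous
            # layer's window boundaries, mixing information across windows.
            x = torch.roll(x, shifts=-shift, dims=1)
        # Partition into windows and attend within each window only.
        w = x.reshape(b * n // self.window_size, self.window_size, d)
        w, _ = self.attn(w, w, w)
        x = w.reshape(b, n, d)
        if shift:
            # Undo the roll to restore the original time ordering.
            x = torch.roll(x, shifts=shift, dims=1)
        return x

# Usage: stack two applications, the second shifted by half a window, so every
# time step eventually attends across window boundaries.
x = torch.randn(8, 96, 64)                   # (batch, 96 time steps, 64 channels)
layer = WindowAttention1D(dim=64, window_size=12, num_heads=4)
x = layer(x, shift=0)                        # regular window partition
x = layer(x, shift=6)                        # windows shifted by half a window
print(x.shape)                               # torch.Size([8, 96, 64])
```

Varying `window_size` across cascaded stages would correspond to the bottom-up, local-to-global feature extraction described in the abstract: small windows capture local variations cheaply, and larger or shifted windows aggregate them into global trends.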