Toward a Variation-Aware and Interpretable Model for Radar Image Sequence Prediction

Published: 01 Jan 2024 (Last Modified: 17 Apr 2025) · IEEE Trans. Ind. Informatics 2024 · CC BY-SA 4.0
Abstract: Radar image sequence prediction (RISP) aims to predict future radar images based on historical observations. In recent years, neural network-based methods have shown impressive performance on RISP. However, two limitations still exist: 1) they fail to exploit variation information when capturing spatial dependencies, and 2) they neglect to analyze and interpret the model. In this article, we propose a variation-aware prediction model to address the first limitation and develop a relevance propagation technique for the second. Specifically, 1) we recustomize the vanilla convolution by introducing a variation-aware term. The new convolution unit offers two advantages when capturing spatial dependencies: it exploits variation information and provides spatially-varying kernels. As a result, it can learn diverse and complex radar echo patterns. By equipping a typical network (PredRNN) with this unit, we propose a novel prediction model, dubbed VA-PredRNN. 2) To analyze our model, we propagate the output backward, layer by layer, to the input, thereby revealing the relevance between the output and the intermediate states. To the best of the authors' knowledge, this is the first work to study the interpretability of a multilayer RISP model. We conduct extensive experiments on two datasets, and the results demonstrate the effectiveness of VA-PredRNN. We also carry out a series of analyses using the proposed relevance propagation technique, from which we identify the importance of the different states.
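The abstract does not detail how the variation-aware convolution is constructed. The following is a minimal sketch, assuming the variation term is the deviation of each pixel from its local neighborhood mean and that the spatially-varying behavior comes from a per-position gate computed from that variation; the class name `VariationAwareConv2d` and this particular formulation are illustrative assumptions, not the authors' actual design.

```python
# Hypothetical sketch of a variation-aware convolution unit (not the paper's
# exact formulation). Assumption: "variation" = input minus its local mean,
# used to gate a second pathway differently at every spatial position.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationAwareConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.kernel_size = kernel_size
        # Standard content pathway on the raw input.
        self.content_conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad)
        # Pathway driven by the local variation of the input.
        self.variation_conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad)
        # Produces a per-pixel gate, i.e., a spatially-varying weighting.
        self.gate_conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad)

    def forward(self, x):
        # Local variation: deviation of each pixel from its neighborhood mean.
        local_mean = F.avg_pool2d(x, self.kernel_size, stride=1,
                                  padding=self.kernel_size // 2,
                                  count_include_pad=False)
        variation = x - local_mean
        gate = torch.sigmoid(self.gate_conv(variation))
        # Blend the content response with the variation response,
        # weighted differently at every spatial position.
        return self.content_conv(x) + gate * self.variation_conv(variation)

# Example: a batch of 4 single-channel 64x64 radar echo maps.
layer = VariationAwareConv2d(in_ch=1, out_ch=16)
frames = torch.randn(4, 1, 64, 64)
print(layer(frames).shape)  # torch.Size([4, 16, 64, 64])
```

Similarly, the layer-by-layer relevance propagation is only described at a high level. The sketch below assumes a rule in the spirit of layer-wise relevance propagation (LRP) with an epsilon stabilizer, applied to plain linear layers; `lrp_linear` is a hypothetical helper, and the paper's actual rule for multilayer recurrent states may differ.

```python
# Minimal sketch of backward relevance propagation (epsilon-style rule),
# tracing output relevance back through a toy two-layer network. This is an
# illustrative assumption about the technique, not the authors' exact method.
import torch
import torch.nn as nn

def lrp_linear(layer: nn.Linear, inputs: torch.Tensor,
               relevance: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Redistribute output relevance onto the layer's inputs."""
    z = layer(inputs)
    z = z + eps * torch.where(z >= 0, torch.ones_like(z), -torch.ones_like(z))
    s = relevance / z            # per-output scaling factor
    c = s @ layer.weight         # back-project onto the inputs
    return inputs * c            # input times its contribution

# Toy usage: relevance of a two-layer MLP's output w.r.t. hidden and input states.
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
x = torch.randn(1, 8)
hidden = net[0](x).relu()
output = net[2](hidden)
r_hidden = lrp_linear(net[2], hidden, output)   # relevance of the hidden state
r_input = lrp_linear(net[0], x, r_hidden)       # relevance of the input
print(r_hidden.shape, r_input.shape)            # (1, 16) (1, 8)
```

In the same spirit, the abstract's analysis would repeat such a backward step through every layer of VA-PredRNN, which is what allows the relevance of each intermediate state to be compared.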