Spatial-Temporal Context Model for Remote Sensing Imagery Compression

Jinxiao Zhang, Runmin Dong, Juepeng Zheng, Mengxuan Chen, Lixian Zhang, Yi Zhao, Haohuan Fu

Published: 28 Oct 2024, Last Modified: 04 Nov 2025. License: CC BY-SA 4.0
Abstract: With the increasing spatial and temporal resolution of acquired remote sensing (RS) images, effective compression becomes critical for storage, transmission, and large-scale in-memory processing. Although image compression methods have achieved a series of breakthroughs on natural images, applying them directly to the RS domain underutilizes properties of RS imagery such as content duplication, homogeneity, and temporal redundancy. This paper proposes a Spatial-Temporal Context Model (STCM) for RS image compression that jointly leverages context from a broader spatial scope and from images acquired at different times. Specifically, we propose a stacked diagonal masked module that expands the contextual reference scope while remaining stackable and parallelizable. Furthermore, we propose spatial-temporal contextual adaptive coding, which enables the entropy estimation to reference context across RS images of the same geographic location acquired at different times. Experiments show that our method outperforms previous state-of-the-art compression methods in rate-distortion (RD) performance. For downstream task validation, our method reduces the bitrate by 52 times for single-temporal images in the scene classification task while maintaining accuracy.
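A minimal PyTorch sketch of the two ideas the abstract names, purely illustrative since the abstract gives no architectural details: `DiagonalMaskedConv2d` masks a convolution kernel so each latent position only references neighbors on earlier anti-diagonals (one hypothetical reading of "diagonal masked"); stacking such layers widens the causal spatial context while each layer remains a single parallel convolution, and `SpatialTemporalEntropyModel` fuses that spatial context with latents of a co-located earlier acquisition to predict entropy parameters. All class and variable names here are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DiagonalMaskedConv2d(nn.Conv2d):
    # Keep only kernel positions on strictly earlier anti-diagonals, so each
    # latent sees context that would already be decoded (hypothetical mask).
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        kh, kw = self.kernel_size
        mask = torch.zeros(kh, kw)
        for i in range(kh):
            for j in range(kw):
                if i + j < kh // 2 + kw // 2:
                    mask[i, j] = 1.0
        self.register_buffer("mask", mask)

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)


class StackedDiagonalContext(nn.Module):
    # Stacking masked convolutions enlarges the causal receptive field
    # while every layer stays a single parallel convolution pass.
    def __init__(self, channels, depth=3):
        super().__init__()
        self.layers = nn.ModuleList(
            [DiagonalMaskedConv2d(channels, channels) for _ in range(depth)])

    def forward(self, y_hat):
        ctx = y_hat
        for layer in self.layers:
            ctx = F.relu(layer(ctx))
        return ctx


class SpatialTemporalEntropyModel(nn.Module):
    # Predict Gaussian entropy parameters (mu, sigma) from the spatial
    # context and latents of a co-located image from another time.
    def __init__(self, channels):
        super().__init__()
        self.spatial = StackedDiagonalContext(channels)
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 1), nn.ReLU(),
            nn.Conv2d(channels, 2 * channels, 1))

    def forward(self, y_hat, temporal_ref):
        ctx = self.spatial(y_hat)
        mu, sigma = self.fuse(torch.cat([ctx, temporal_ref], dim=1)).chunk(2, dim=1)
        return mu, F.softplus(sigma)


# Toy usage: quantized latents of the current image and of an earlier
# acquisition at the same geographic location (shapes are illustrative).
y_hat = torch.randn(1, 192, 16, 16)
temporal_ref = torch.randn(1, 192, 16, 16)
mu, sigma = SpatialTemporalEntropyModel(192)(y_hat, temporal_ref)
```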