Abstract: AI-based compression is gaining popularity for traditional photos and videos. However, such techniques typically do not scale well to hyperspectral images, and their memory and floating-point operation requirements may be prohibitive for deployment onboard satellites. In this paper, we explore the design of a predictive compression method based on a novel neural network design, called LineRWKV. Our neural network predictor works in a line-by-line fashion, limiting memory and computational requirements thanks to a recurrent inference mechanism. However, in contrast to classic recurrent networks, it relies on an attention operation that can be parallelized for training, akin to Transformers, unlocking efficient training on the large datasets that are critical to learning complex predictors. Preliminary results show that LineRWKV significantly outperforms the state-of-the-art CCSDS-123 standard and has competitive throughput.
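The core idea of the abstract, a model that runs as a cheap recurrence at inference but trains with a parallelizable attention-like form, can be illustrated with a minimal sketch. This is not LineRWKV's actual formulation; it is a toy RWKV-style linear attention with a constant decay `w`, where `k` and `v` stand in for per-step key and value scalars. The point is that the recurrent form (O(1) state per step) and the attention-matrix form (parallelizable, Transformer-like) compute identical outputs.

```python
def recurrent_form(k, v, w):
    # Inference-style pass: a single running state, constant memory per step.
    num, den = 0.0, 0.0
    out = []
    for kt, vt in zip(k, v):
        num = w * num + kt * vt  # decayed weighted sum of values
        den = w * den + kt       # decayed normalizer
        out.append(num / den)
    return out

def parallel_form(k, v, w):
    # Training-style pass: each output attends to all past steps at once.
    # In practice this is a single masked matrix multiply over the sequence.
    out = []
    for t in range(len(k)):
        num = sum(w ** (t - i) * k[i] * v[i] for i in range(t + 1))
        den = sum(w ** (t - i) * k[i] for i in range(t + 1))
        out.append(num / den)
    return out
```

Both forms agree step by step, which is what lets such models train in parallel on large datasets while keeping recurrent, low-memory inference for onboard use.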