Video Compressed Sensing Reconstruction via an Untrained Network with Low-Rank Regularization

Yuanhong Zhong, Chenxu Zhang, Xun Yang, Shanshan Wang

Published: 01 Jan 2024, Last Modified: 21 Jan 2026 · IEEE Transactions on Multimedia · CC BY-SA 4.0
Abstract: Deep image prior (DIP) is an emerging technique showing that the structure of an untrained network can serve as an excellent prior for image restoration. It bridges the gap between training-based and training-free methods and exhibits considerable potential in image compressed sensing (CS) reconstruction. In this article, we extend DIP and propose a novel Low-Rank Regularization Video Compressed Sensing Network for CS video reconstruction (dubbed LRR-VCSNet). We explore the use of a low-rank latent tensor with an untrained network for global low-rank regularization of video reconstruction, and we also exploit interframe low-rank approximation for framewise nonlocal low-rank regularization in the data space. In addition, we design the structure of the untrained network on an encoder-decoder architecture to improve performance. Extensive experiments on six standard CIF video sequences show that LRR-VCSNet significantly outperforms traditional video CS methods and achieves competitive results compared with state-of-the-art training-based video CS methods.
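The interframe low-rank approximation mentioned in the abstract rests on a standard observation: consecutive video frames are highly correlated, so a matrix whose columns are vectorized frames is approximately low-rank, and soft-thresholding its singular values yields a low-rank regularized estimate. The sketch below is not the paper's implementation; it is a minimal illustration of that building block (singular value thresholding), with the frame sizes and threshold chosen arbitrarily.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: soft-threshold the singular
    values of X by tau, the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thresh = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_thresh) @ Vt

# Toy "video": 5 frames of 8x8 pixels that are nearly identical,
# mimicking the strong interframe redundancy the method exploits.
rng = np.random.default_rng(0)
base = rng.random((8, 8))
frames = np.stack([base + 0.01 * rng.standard_normal((8, 8))
                   for _ in range(5)])

# Vectorize each frame into a column -> (H*W, T) matrix with
# highly correlated columns, hence approximately rank one.
M = frames.reshape(5, -1).T
M_low = svt(M, tau=0.5)

# The thresholded matrix keeps the shared content and discards
# the small frame-to-frame perturbations.
print(np.linalg.matrix_rank(M_low))
```

In a DIP-style reconstruction loop, a term like the nuclear norm of this frame matrix would be added to the measurement-fidelity loss, so the untrained network is steered toward temporally consistent video rather than fitting each frame independently.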