Abstract: Deep network-based image Compressive Sensing (CS) has attracted much attention in recent years. However, two issues remain: 1) existing methods typically use fixed-scale sampling, which limits their adaptation to image content; 2) most pre-trained models can only handle fixed sampling rates and fixed block scales, which restricts their scalability. In this paper, we propose a novel scale-aware scalable CS network (dubbed S2-CSNet), which achieves scale-aware adaptive sampling, fine granular scalability, and high-quality reconstruction with a single model. Specifically, to enhance the scalability of the model, a structural sampling matrix with a predefined row order is first designed; this universal sampling matrix can sample multi-scale image blocks at arbitrary sampling rates. Then, based on the universal sampling matrix, a distortion-guided scale-aware scheme is presented to achieve scale-variable adaptive sampling: it predicts the reconstruction distortion at different sampling scales from the measurements and selects the optimal division scale for sampling. Furthermore, a multi-scale hierarchical sub-network under a well-defined compact framework is put forward to reconstruct the image. In the multi-scale feature domain of the sub-network, a dual spatial attention module is developed to explore the local and global affinities between dense feature representations for deep fusion. Extensive experiments demonstrate that the proposed S2-CSNet outperforms existing state-of-the-art CS methods.
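To illustrate the "universal sampling matrix" idea described above, the following is a minimal sketch of rate-scalable block CS sampling: one fixed matrix with a predefined row order, whose leading rows act as the sampling operator for any sampling rate. All details here (block size, Gaussian initialization, orthonormal rows, function names) are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

B = 32                       # block side length (assumed)
N = B * B                    # number of pixels per block
rng = np.random.default_rng(0)

# Orthonormalize a random Gaussian matrix so its rows have a fixed,
# predefined order; the first k rows then form a valid sampling
# operator for sampling rate k / N.
Phi_full, _ = np.linalg.qr(rng.standard_normal((N, N)))

def sample_block(block, rate):
    """Sample a BxB image block at an arbitrary rate in (0, 1]."""
    k = max(1, int(round(rate * N)))
    return Phi_full[:k] @ block.reshape(-1)

block = rng.random((B, B))
y_low = sample_block(block, 0.10)
y_high = sample_block(block, 0.25)
# Measurements at a lower rate are a prefix of those at a higher rate,
# which is what makes a single matrix scalable across sampling rates.
assert np.allclose(y_high[:y_low.size], y_low)
```

The prefix property in the final assertion is the key consequence of a predefined row order: no re-sampling or re-training is needed when the rate changes, since every rate reuses the same leading rows.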