A Progressively Prompt-guided Model for Sparse-View CT Reconstruction

Published: 2024 · Last Modified: 05 Mar 2025 · BIBM 2024 · CC BY-SA 4.0
Abstract: While sparse-view Computed Tomography (CT) has a remarkable impact on reducing ionizing radiation dose while accelerating data acquisition, the reconstructed images are compromised by streak-like artifacts that affect clinical diagnosis. By integrating powerful regularization and deep learning techniques into iterative reconstruction algorithms, deep-unrolling-based methods have achieved promising results in terms of reconstruction quality and theoretical interpretability. However, leading methods focus on learning powerful content priors with diverse techniques while ignoring the latent noise distribution prior in the image domain, which limits the model's ability to preserve structures and reconstruct details. To alleviate this problem, we propose a Progressively Prompt-guided Model (PPM for short) for sparse-view CT reconstruction. Specifically, we inject the idea of prompt learning into an iteratively unrolled neural network: a learnable prompt module is inserted into each unrolled block to perceive image content and noise distribution in a self-adaptive manner, yielding more powerful priors that guide high-quality CT image reconstruction. Furthermore, we construct a progressive guiding strategy to facilitate high-quality prompt generation while speeding up model convergence. Extensive experiments demonstrate that our PPM achieves state-of-the-art performance in artifact suppression, structural fidelity, and visual perceptual similarity. The code is available at https://github.com/Wenchao-Du/PPM/.
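To make the architecture described above concrete, the following is a minimal, hypothetical PyTorch sketch of one unrolled iteration with a learnable prompt module inserted after the data-consistency step. It is not the authors' implementation (see the linked repository for that); the module names, prompt-bank design, feature sizes, and the identity "projector" used in the demo are illustrative assumptions only.

```python
# Hypothetical sketch of a prompt-guided unrolled block; NOT the official PPM code.
import torch
import torch.nn as nn


class PromptModule(nn.Module):
    """Learnable prompt bank modulated by the current image estimate (assumed design)."""

    def __init__(self, channels: int, num_prompts: int = 4):
        super().__init__()
        # Small bank of learnable prompt feature maps.
        self.prompts = nn.Parameter(torch.randn(num_prompts, channels, 16, 16))
        # Predict per-prompt mixing weights from global feature statistics.
        self.weight_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, num_prompts), nn.Softmax(dim=1),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        w_k = self.weight_head(feat)                              # (B, K)
        prompt = torch.einsum("bk,kchw->bchw", w_k, self.prompts)  # weighted prompt
        prompt = nn.functional.interpolate(
            prompt, size=(h, w), mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([feat, prompt], dim=1))


class UnrolledBlock(nn.Module):
    """One unrolled iteration: gradient-style data consistency + prompt-guided prior."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))   # learnable step size
        self.encode = nn.Conv2d(1, channels, 3, padding=1)
        self.prompt = PromptModule(channels)
        self.decode = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, x, y, forward_op, adjoint_op):
        # Data-consistency step against the sparse-view measurements y.
        x = x - self.step * adjoint_op(forward_op(x) - y)
        # Prompt-guided refinement in the image domain (residual update).
        feat = torch.relu(self.encode(x))
        feat = self.prompt(feat)
        return x + self.decode(feat)


if __name__ == "__main__":
    # Toy demo with an identity "projector" standing in for a real CT operator.
    forward_op = adjoint_op = lambda z: z
    block = UnrolledBlock()
    x0 = torch.zeros(2, 1, 64, 64)
    y = torch.randn(2, 1, 64, 64)
    print(block(x0, y, forward_op, adjoint_op).shape)  # torch.Size([2, 1, 64, 64])
```

In the paper's progressive strategy, several such blocks would be stacked and the prompt modules guided stage by stage; the sketch above only illustrates how a prompt bank could be injected into a single unrolled iteration.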