SCRED-Distillation: Improving Low-Dose CT Image Quality via Feature Fusion and Mutual Learning

Yanqing Wang, Xinru Zhan, Wanquan Liu, Yingying Li, Kexin Guo, Huafeng Wang

Published: 01 Jan 2025, Last Modified: 05 Nov 2025 · IEEE Access · CC BY-SA 4.0
Abstract: The substantial noise inherent in low-dose CT (LDCT) significantly impedes diagnostic accuracy. Although deep learning techniques, particularly CNNs, have shown promise for LDCT denoising, their focus on local features and the scarcity of large training datasets limit their performance and generalization. To address these shortcomings, we introduce SCRED-Distillation, a novel denoising method that integrates the global contextual awareness of Transformer architectures with the efficiency and regularization benefits of knowledge distillation. By leveraging both local and global image characteristics, SCRED-Distillation achieves superior denoising results. To further improve generalization across diverse datasets, we employ a mutual learning framework during training. Quantitative evaluations on the Mayo Clinic LDCT Grand Challenge dataset show marked improvements in key image quality metrics: the Peak Signal-to-Noise Ratio (PSNR) increased from 29.2489 to 33.2103, the Structural Similarity Index Measure (SSIM) rose from 0.8759 to 0.9132, and the Root Mean Squared Error (RMSE) fell from 14.2416 to 8.9377. SCRED-Distillation suppresses noise artifacts while preserving fine diagnostic details, yielding clearer and more reliable medical images and ultimately facilitating more accurate clinical diagnoses.
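The abstract does not detail the mutual learning objective, so the following is only a rough sketch of a standard deep mutual learning step for image denoising: two peer networks each fit the clean (normal-dose) target while also mimicking each other's prediction. The placeholder architectures, the MSE-based mimicry term, and the weight LAM are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder peers: the paper pairs CNN- and Transformer-based branches,
# whose exact architectures are not specified in the abstract.
net_a = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(32, 1, 3, padding=1))
net_b = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(32, 1, 3, padding=1))
opt_a = torch.optim.Adam(net_a.parameters(), lr=1e-4)
opt_b = torch.optim.Adam(net_b.parameters(), lr=1e-4)
LAM = 0.1  # weight of the peer-mimicry term (assumed hyperparameter)

def mutual_step(ldct: torch.Tensor, ndct: torch.Tensor) -> None:
    """One deep-mutual-learning update: each peer fits the normal-dose
    target (MSE) and is also pulled toward the other peer's prediction."""
    # Update peer A against a frozen snapshot of peer B's output.
    out_a = net_a(ldct)
    loss_a = F.mse_loss(out_a, ndct) + LAM * F.mse_loss(out_a, net_b(ldct).detach())
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()

    # Update peer B against the refreshed, frozen output of peer A.
    out_b = net_b(ldct)
    loss_b = F.mse_loss(out_b, ndct) + LAM * F.mse_loss(out_b, net_a(ldct).detach())
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()

# Usage: one training step on a batch of low-dose / normal-dose pairs.
mutual_step(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
```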
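For reference, the reported RMSE and PSNR follow their standard definitions; a minimal sketch is below. The data_range default of 255 is an assumption for illustration; the paper's CT intensity range is not given in the abstract.

```python
import numpy as np

def rmse(reference: np.ndarray, denoised: np.ndarray) -> float:
    """Root mean squared error against the normal-dose reference image."""
    diff = reference.astype(np.float64) - denoised.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(reference: np.ndarray, denoised: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 20 * log10(data_range / RMSE).
    SSIM is typically computed with skimage.metrics.structural_similarity."""
    e = rmse(reference, denoised)
    return float("inf") if e == 0.0 else 20.0 * np.log10(data_range / e)
```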