D2GAN: A Dual-Domain Generative Adversarial Network for High-Quality PET Image Reconstruction

Published: 01 Jan 2024, Last Modified: 27 May 2025. IJCNN 2024. License: CC BY-SA 4.0.
Abstract: Positron emission tomography (PET) is a widely adopted nuclear imaging technique for early tumor detection and brain disorder diagnosis, but its intrinsic tracer radiation inevitably poses health risks to patients. Recently, to achieve high-quality PET imaging while reducing radiation exposure, numerous methods have been proposed to reconstruct standard-dose PET (SPET) images from low-dose PET (LPET) images. However, these methods usually overlook crucial regions and fine details during reconstruction, leading to high-frequency distortions in the reconstructed images. To this end, we propose D2GAN, a dual-domain generative adversarial network that exploits both spatial- and frequency-domain information to mitigate high-frequency disparities, facilitating high-quality PET reconstruction. The core of our approach is the Dual-Domain Learning Block (DLB), comprising a Spatial Domain Learning Block (SDLB) that identifies key regions and details in PET images, and a Frequency Domain Learning Block (FDLB) that further refines these areas by amplifying the image's high-frequency signals. In addition, we introduce a multi-scale residual block (MSRB) to efficiently extract features at various scales, and incorporate a focal frequency loss to encourage consistency between the reconstructed and real SPET images in the frequency domain. The DLBs and MSRBs are embedded into a U-shaped structure to form our generator. Furthermore, we apply a patch-based discriminator to enforce the data-distribution consistency of the reconstructed PET images. Extensive experiments on two public datasets and an in-house clinical dataset demonstrate that our approach outperforms state-of-the-art PET reconstruction methods.
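To give a flavor of the frequency-domain supervision mentioned in the abstract, below is a minimal NumPy sketch of a focal-frequency-style loss: it compares the 2-D spectra of a reconstructed and a reference image and down-weights easy (low-error) frequency components so that hard, typically high-frequency, components dominate. This is an illustrative assumption, not the paper's exact formulation; the function name and the `alpha` weighting exponent are hypothetical.

```python
import numpy as np

def focal_frequency_loss(pred, target, alpha=1.0):
    """Sketch of a focal frequency loss between two 2-D images.

    Weights each frequency component by its current spectral error,
    so poorly reconstructed (often high-frequency) components
    contribute more to the objective.
    """
    # 2-D FFT of the predicted and reference images
    f_pred = np.fft.fft2(pred)
    f_target = np.fft.fft2(target)
    # squared distance between the complex spectra, per frequency
    dist = np.abs(f_pred - f_target) ** 2
    # dynamic weight map: larger errors get larger weights, scaled to [0, 1]
    w = dist ** alpha
    if w.max() > 0:
        w = w / w.max()
    return float(np.mean(w * dist))

# identical images yield zero loss; a perturbation raises it
rng = np.random.default_rng(0)
img = rng.random((32, 32))
print(focal_frequency_loss(img, img))                              # 0.0
print(focal_frequency_loss(img + 0.1 * rng.random((32, 32)), img) > 0)  # True
```

In a training setup such a term would be added to the spatial (e.g. pixel-wise and adversarial) losses, pushing the generator to match the reference spectrum rather than only the pixel values.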