Deformation-Compensated Learning for Image Reconstruction Without Ground Truth

IEEE Trans. Medical Imaging, 2022 (modified: 09 Nov 2022)
Abstract: Deep neural networks for medical image reconstruction are traditionally trained using high-quality ground-truth images as training targets. Recent work on Noise2Noise (N2N) has shown the potential of using multiple noisy measurements of the same object as an alternative to ground truth. However, existing N2N-based methods are not suitable for learning from measurements of an object undergoing nonrigid deformation. This paper addresses this issue by proposing the deformation-compensated learning (DeCoLearn) method, which trains deep reconstruction networks while compensating for object deformations. A key component of DeCoLearn is a deep registration module that is jointly trained with the reconstruction network without any ground-truth supervision. We validate DeCoLearn on both simulated and experimentally collected magnetic resonance imaging (MRI) data and show that it significantly improves imaging quality.
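To make the core idea concrete, the following is a minimal, hypothetical sketch of a deformation-compensated N2N-style loss: a reconstruction from one measurement is warped by an estimated displacement field into the frame of a second noisy measurement before the two are compared. This is an illustration of the training principle only, not the paper's implementation; the nearest-neighbor `warp` and the function names are assumptions, and in DeCoLearn the displacement field would come from the jointly trained registration module rather than being given.

```python
import numpy as np

def warp(image, flow):
    """Warp a 2D image by a per-pixel displacement field.

    flow[..., 0] and flow[..., 1] hold row and column displacements.
    Nearest-neighbor sampling with edge clamping keeps the sketch simple;
    a real implementation would use differentiable bilinear interpolation.
    """
    h, w = image.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_r = np.clip(np.round(rows + flow[..., 0]).astype(int), 0, h - 1)
    src_c = np.clip(np.round(cols + flow[..., 1]).astype(int), 0, w - 1)
    return image[src_r, src_c]

def deformation_compensated_loss(recon_a, meas_b, flow_ab):
    """N2N-style loss with deformation compensation.

    The reconstruction of measurement A is warped into the frame of
    measurement B and compared against the (noisy) measurement B, so no
    ground-truth image is needed as a training target.
    """
    return float(np.mean((warp(recon_a, flow_ab) - meas_b) ** 2))
```

With a zero displacement field this reduces to the ordinary N2N comparison between the reconstruction and the second measurement; a nonzero field lets the loss remain meaningful when the object has moved between acquisitions.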