Greedy Learning for Large-Scale Neural MRI Reconstruction

Published: 19 Oct 2021 · Last Modified: 05 May 2023
NeurIPS 2021 Deep Inverse Workshop (Oral)
Keywords: Greedy Learning, Memory Efficiency, MRI Reconstruction, Model-Based Networks
TL;DR: We propose greedy learning for model-based networks in MRI reconstruction, reducing the training memory footprint 6-fold compared to backpropagation while preserving generalization performance and keeping compute time almost unchanged.
Abstract: Model-based deep learning approaches have recently shown state-of-the-art performance for accelerated MRI reconstruction. These methods unroll iterative proximal gradient descent, alternating between a data-consistency step and a neural-network-based proximal operator. However, high-resolution and multi-dimensional imaging (e.g., 3D MRI) demands many unrolled iterations with sufficiently expressive proximals. This impedes traditional training via backpropagation, since computing end-to-end gradients and storing intermediate activations for every layer becomes prohibitively memory- and compute-intensive. To address this challenge, we advocate an alternative training method that greedily relaxes the objective: we split the end-to-end network into decoupled modules and optimize each module separately, thereby avoiding the need to compute costly end-to-end gradients. We empirically demonstrate that the proposed greedy learning method requires 6x less memory with no additional computation, while generalizing slightly better than backpropagation.
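The training scheme described in the abstract can be made concrete with a short sketch. The following PyTorch code is a minimal illustration under stated assumptions, not the authors' implementation: `UnrollBlock`, the single-coil FFT data-consistency step, the per-block MSE loss against the final target, and all hyperparameters are hypothetical choices introduced here. The key mechanism is the `detach()` between blocks, which confines each backward pass to a single module so the full unrolled computation graph is never materialized.

```python
# Hypothetical sketch of greedy module-wise training for an unrolled
# MRI reconstruction network (simplified toy model, not the paper's code).
import torch
import torch.nn as nn

class UnrollBlock(nn.Module):
    """One unrolled iteration: data-consistency step + learned proximal."""
    def __init__(self, channels=2):
        super().__init__()
        # Small CNN standing in for the proximal operator.
        self.prox = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )
        self.step = nn.Parameter(torch.tensor(0.1))  # gradient step size

    def forward(self, x, y, mask):
        # Single-coil data consistency: x <- x - step * F^H(mask * (F x - y)),
        # with x real-valued (B, 2, H, W) and y complex k-space (B, H, W).
        kspace = torch.fft.fft2(torch.view_as_complex(x.movedim(1, -1).contiguous()))
        resid = mask * (kspace - y)
        grad = torch.view_as_real(torch.fft.ifft2(resid)).movedim(-1, 1)
        x = x - self.step * grad
        return x + self.prox(x)  # residual proximal update

def train_greedy(blocks, loader, epochs=1):
    """Greedy training: each block is optimized against the target
    independently, so only one block's activations are held in memory
    at a time (vs. the full unrolled graph for end-to-end backprop)."""
    opts = [torch.optim.Adam(b.parameters(), lr=1e-4) for b in blocks]
    for _ in range(epochs):
        for x0, y, mask, target in loader:
            x = x0
            for block, opt in zip(blocks, opts):
                out = block(x.detach(), y, mask)  # detach: no end-to-end graph
                loss = nn.functional.mse_loss(out, target)
                opt.zero_grad()
                loss.backward()                   # gradients stay in this block
                opt.step()
                x = out.detach()                  # free this block's graph
```

Because each `backward()` only traverses one block, peak activation memory scales with a single module rather than with the number of unrolled iterations, which is the source of the memory savings the abstract reports.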