Memory-efficient Learning for Large-scale Computational Imaging

Published: 21 Oct 2019, Last Modified: 17 Nov 2023. NeurIPS 2019 Deep Inverse Workshop Poster.
Keywords: Computational Imaging, Backpropagation, Learning, Reverse-mode differentiation, microscopy, magnetic resonance imaging
TL;DR: We propose a memory-efficient learning procedure that exploits the reversibility of the network’s layers to enable data-driven design for large-scale computational imaging.
Abstract: Computational imaging systems jointly design computation and hardware to retrieve information that is not traditionally accessible with standard imaging systems. Recently, critical aspects such as experimental design and image priors have been optimized through deep neural networks formed by the unrolled iterations of classical physics-based reconstructions (termed physics-based networks). However, for real-world large-scale systems, computing gradients via backpropagation restricts learning due to the memory limitations of graphics processing units. In this work, we propose a memory-efficient learning procedure that exploits the reversibility of the network's layers to enable data-driven design for large-scale computational imaging. We demonstrate our method's practicality on two large-scale systems: super-resolution optical microscopy and multi-channel magnetic resonance imaging.
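To make the core idea concrete, here is a minimal sketch (not the authors' code) of memory-efficient backpropagation through a stack of exactly invertible layers. The example uses simple additive coupling layers, chosen here purely for illustration because they can be inverted in closed form: during the backward pass each layer's input is recomputed by inversion, so only the final output needs to be stored rather than every intermediate activation.

```python
import numpy as np

# Hypothetical sketch: invertible additive coupling layers.
# Even layers update half the state:  b <- b + W @ a   (a passes through)
# Odd layers update the other half:   a <- a + W @ b   (b passes through)
# Each update is exactly invertible, e.g.  b = b' - W @ a.

rng = np.random.default_rng(0)
n, L = 4, 3                                  # half-state size, number of layers
Ws = [rng.standard_normal((n, n)) * 0.1 for _ in range(L)]

def forward(a, b):
    for k, W in enumerate(Ws):
        if k % 2 == 0:
            b = b + W @ a
        else:
            a = a + W @ b
    return a, b                              # only the final state is kept

def backward(a, b, grad_a, grad_b):
    """Reverse-mode pass that recomputes each layer's input by inversion
    instead of reading it from a stored activation stack."""
    grads = []
    for k, W in reversed(list(enumerate(Ws))):
        if k % 2 == 0:
            b = b - W @ a                    # invert: recover this layer's input b
            grads.append(np.outer(grad_b, a))       # dL/dW = grad_b a^T
            grad_a = grad_a + W.T @ grad_b          # chain rule through W @ a
        else:
            a = a - W @ b                    # invert: recover this layer's input a
            grads.append(np.outer(grad_a, b))       # dL/dW = grad_a b^T
            grad_b = grad_b + W.T @ grad_a          # chain rule through W @ b
    return list(reversed(grads)), grad_a, grad_b

a0, b0 = rng.standard_normal(n), rng.standard_normal(n)
aL, bL = forward(a0, b0)

# Loss = 0.5 * ||b_L||^2, so dL/db_L = b_L and dL/da_L = 0.
grads, _, _ = backward(aL, bL, np.zeros(n), bL.copy())

# Sanity check: one gradient entry against central finite differences.
eps = 1e-6
Ws[0][0, 0] += eps
_, b_plus = forward(a0, b0)
Ws[0][0, 0] -= 2 * eps
_, b_minus = forward(a0, b0)
Ws[0][0, 0] += eps
fd = (0.5 * b_plus @ b_plus - 0.5 * b_minus @ b_minus) / (2 * eps)
assert abs(grads[0][0, 0] - fd) < 1e-4
```

The same pattern applies to unrolled physics-based reconstructions: if each unrolled iteration can be inverted (exactly or approximately), the memory cost of training becomes independent of the number of iterations, at the price of recomputing each layer's input once during the backward pass.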