Keywords: mri, super-resolution, crop-equivariant, convolutional neural network
TL;DR: Crop-equivariant CNNs allow for test-time patch evaluation equivalent to full-size evaluation.
Abstract: Practical implementations of convolutional neural networks (CNNs) involve training and running models on the GPU. For this to work, memory must be allocated not only for the CNN’s weights but also for all intermediate feature maps, whose size depends on the input image size. Medical image data can routinely exceed the available vRAM. Moreover, a CNN cannot naively be run on cropped inputs whose outputs are then stitched together and be expected to produce results identical to running it on the full image. In this work, we propose an architectural design that allows test-time patch evaluation with results identical to full-slice evaluation. We call this property crop-equivariance and show its equivalence to scenarios where the entire image is loaded into vRAM. We showcase our approach on a self-super-resolution task on test data, where we can compare test-time patch evaluation to full-slice evaluation, as well as on light-sheet fluorescence microscopy images that are too large to fit into vRAM. An added benefit of our approach is a dramatically reduced network size and shorter training and inference times.
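The submission does not include the architecture itself, but the crop-equivariance property it describes can be illustrated with a minimal NumPy sketch (an assumption-laden toy, not the authors' method): with a "valid" convolution and patches that overlap by the kernel's receptive-field margin, the stitched patch outputs match the full-image output exactly.

```python
import numpy as np

def conv2d_valid(img, k):
    # Naive 2D cross-correlation with 'valid' padding:
    # output shrinks by (kernel size - 1) along each axis.
    kh, kw = k.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))  # stand-in for one image slice
k = rng.standard_normal((3, 3))      # stand-in for learned weights

full = conv2d_valid(img, k)          # full-image evaluation, shape (30, 30)

# Patch evaluation: split into top/bottom halves with a 2-row overlap
# (kernel height - 1) so the valid outputs tile the full output exactly.
top = conv2d_valid(img[:17], k)      # input rows 0..16 -> output rows 0..14
bottom = conv2d_valid(img[15:], k)   # input rows 15..31 -> output rows 15..29
stitched = np.vstack([top, bottom])

assert np.allclose(stitched, full)   # patch evaluation == full evaluation
```

The required overlap grows with network depth (each valid layer adds its own margin), which is one reason naive stitching with zero-padded convolutions fails: padding injects border values that differ between a crop boundary and the true image boundary.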
Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
Paper Type: both
Primary Subject Area: Image Synthesis
Secondary Subject Area: Application: Other
Confidentiality And Author Instructions: I read the call for papers and author instructions. I acknowledge that exceeding the page limit and/or altering the latex template can result in desk rejection.