Angular Super-Resolution in Diffusion MRI with a 3D Recurrent Convolutional Autoencoder

Published: 28 Feb 2022, Last Modified: 07 Apr 2024 · MIDL 2022
Keywords: Diffusion MRI, Deep Learning, Angular super-resolution, Recurrent CNN, Image Synthesis
TL;DR: We construct a 3D recurrent CNN architecture to perform angular super-resolution on dMRI data.
Abstract: High-resolution diffusion MRI (dMRI) data is often constrained by limited scanning time in clinical settings, restricting the use of downstream analysis techniques that would otherwise be available. In this work we develop a 3D recurrent convolutional neural network (RCNN) capable of super-resolving dMRI volumes in the angular (q-space) domain. Our approach formulates angular super-resolution as a patch-wise regression using a 3D autoencoder conditioned on target b-vectors. Within the network we use a convolutional long short-term memory (ConvLSTM) cell to model the relationship between q-space samples. We compare model performance against a baseline spherical harmonic interpolation and a 1D variant of the model architecture. We show that the 3D model has the lowest error rates across different subsampling schemes and b-values. The relative performance of the 3D RCNN is greatest in the very low angular resolution domain. Code for this project is available at
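The baseline the abstract compares against is spherical harmonic (SH) interpolation of the q-space signal: fit an even-order SH basis to the signals measured along the acquired gradient directions, then evaluate the fit at the target directions. The sketch below is an illustration of that baseline, not the authors' code; the function names `real_sh_basis` and `sh_interpolate`, the order `lmax`, and the regularisation weight `lam` are all illustrative choices.

```python
import numpy as np
from scipy.special import sph_harm  # newer SciPy also offers sph_harm_y

def real_sh_basis(lmax, theta, phi):
    """Even-order real spherical harmonic basis (antipodally symmetric),
    the usual choice for dMRI signals on the sphere.
    theta: polar angle in [0, pi]; phi: azimuth in [0, 2*pi)."""
    cols = []
    for l in range(0, lmax + 1, 2):          # even orders only
        for m in range(-l, l + 1):
            # SciPy's argument order is (m, l, azimuth, polar)
            Y = sph_harm(abs(m), l, phi, theta)
            if m < 0:
                cols.append(np.sqrt(2.0) * Y.imag)
            elif m == 0:
                cols.append(Y.real)
            else:
                cols.append(np.sqrt(2.0) * Y.real)
    return np.stack(cols, axis=-1)           # (n_dirs, n_coeffs)

def sh_interpolate(signals, dirs_in, dirs_out, lmax=4, lam=1e-3):
    """Fit SH coefficients to signals measured along dirs_in (unit
    b-vectors) by regularised least squares; evaluate at dirs_out."""
    def to_angles(d):
        d = d / np.linalg.norm(d, axis=-1, keepdims=True)
        theta = np.arccos(np.clip(d[:, 2], -1.0, 1.0))
        phi = np.arctan2(d[:, 1], d[:, 0])
        return theta, phi
    B_in = real_sh_basis(lmax, *to_angles(dirs_in))
    B_out = real_sh_basis(lmax, *to_angles(dirs_out))
    A = B_in.T @ B_in + lam * np.eye(B_in.shape[1])
    coeffs = np.linalg.solve(A, B_in.T @ signals)
    return B_out @ coeffs
```

In this framing, the paper's RCNN replaces `sh_interpolate` with a learned, spatially aware predictor: instead of fitting a fixed basis per voxel, 3D patches plus the target b-vector are fed to the conditioned autoencoder.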
Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
Paper Type: methodological development
Primary Subject Area: Image Synthesis
Secondary Subject Area: Image Acquisition and Reconstruction
Confidentiality And Author Instructions: I read the call for papers and author instructions. I acknowledge that exceeding the page limit and/or altering the latex template can result in desk rejection.
Code And Data: Code for this project can be found at . The HCP data used to train and validate the results are available through the HCP at