Context Aware Convolutional Neural Networks for Segmentation of Aortic Dissection

Jan 25, 2020 Blind Submission
  • Keywords: aortic dissection, segmentation, convolutional neural networks, deep learning, 3D reconstruction
  • TL;DR: We introduced a variation on the common U-net architecture in which stacks of slices are fed into the network instead of individual 2D slices, allowing the network to learn from neighboring slices.
  • Track: full conference paper
  • Paper Type: both
  • Abstract: Three-dimensional (3D) reconstruction of patient-specific arteries is necessary for a variety of medical and engineering fields, such as surgical planning and physiological modeling. These geometries are created by segmenting and stacking hundreds (or thousands) of two-dimensional (2D) slices from a patient scan to form a composite 3D structure. However, this process is typically laborious and can take hours to fully segment each scan. Convolutional neural networks (CNNs) offer an attractive alternative to reduce the burden of manual segmentation, allowing researchers to reconstruct 3D geometries in a fraction of the time. We focused this work specifically on Stanford type B aortic dissection (TBAD), characterized by a tear in the descending aortic wall that creates two channels of blood flow: a normal channel called a true lumen and a pathologic new channel within the wall called a false lumen. While significant work has been dedicated to automated aortic segmentation, TBAD segmentations present unique challenges due to their irregular shapes, the need to distinguish between the two lumens, and patient-to-patient variability in the false lumen contrast. Here, we introduced a variation on the U-net architecture in which small stacks of slices are fed into the network instead of individual 2D slices. This allowed the network to take advantage of contextual information present within neighboring slices. We compared and evaluated this variation against a variety of standard CNN segmentation architectures and found that our stacked input structure improved segmentation accuracy for both the true and false lumen by more than 12%. The resulting segmentations allowed for more accurate 3D reconstructions that closely matched our manual results. (A sketch of the stacked-input idea is given after the submission details below.)
  • Supplementary Material:  zip
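
To make the stacked-input idea concrete, here is a minimal sketch in PyTorch. This is not the authors' code: the helper name `stack_slices`, the stack radius, the tiny encoder-decoder standing in for the full U-net, and the three output classes (background, true lumen, false lumen) are illustrative assumptions based only on the abstract.

```python
# Minimal sketch (assumptions labeled): feeding small stacks of neighboring
# CT slices to a 2D U-net-style network as extra input channels.
import torch
import torch.nn as nn

def stack_slices(volume: torch.Tensor, radius: int = 1) -> torch.Tensor:
    """volume: [D, H, W] -> stacks: [D, 2*radius+1, H, W].

    Each output row holds a slice plus its `radius` neighbors above and
    below; edge slices are handled by repeating the boundary slice.
    The radius value is an assumption (the paper says "small stacks").
    """
    depth = volume.shape[0]
    idx = torch.arange(depth).unsqueeze(1) + torch.arange(-radius, radius + 1)
    idx = idx.clamp(0, depth - 1)   # repeat edge slices at volume boundaries
    return volume[idx]              # fancy indexing -> [D, 2r+1, H, W]

class TinyUNet(nn.Module):
    """Toy encoder-decoder standing in for the full U-net; the key change
    is in_channels = 2*radius + 1 instead of 1."""
    def __init__(self, in_channels: int, n_classes: int = 3):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),  # background / true lumen / false lumen
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dec(self.enc(x))

if __name__ == "__main__":
    scan = torch.randn(120, 256, 256)       # fake CT volume: D x H x W
    stacks = stack_slices(scan, radius=1)   # [120, 3, 256, 256]
    net = TinyUNet(in_channels=3)
    logits = net(stacks[:4])                # mini-batch of 4 stacks
    print(logits.shape)                     # torch.Size([4, 3, 256, 256])
```

The design point this sketch illustrates is that carrying neighboring slices as extra input channels keeps the network fully 2D while still exposing through-plane context, which is the contextual information the abstract refers to.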