Keywords: Deep learning, Intravascular ultrasound, Wavelets, Scattering transformation, Segmentation, Attention, Small data set, Calcifications, Lumen, Vessel wall
TL;DR: Incorporating scattering transforms via an attention block into CNNs to improve segmentation results on small intravascular ultrasound data sets.
Abstract: Using intracoronary imaging modalities such as intravascular ultrasound (IVUS) has a positive impact on the results of percutaneous coronary interventions. Efficient extraction of important vessel metrics such as lumen diameter, vessel wall thickness, or plaque burden via automatic segmentation of IVUS images can improve the clinical workflow. State-of-the-art segmentation results are usually achieved by data-driven methods like convolutional neural networks (CNNs). However, clinical data sets are often rather small, leading to the extraction of image features that are not very meaningful and thus to decreased performance. This also holds for applications that inherently allow only small amounts of data, e.g., the detection of diseases with extremely low prevalence or the online adaptation of an existing algorithm to individual patients. In this work we investigate how integrating scattering transformations, a special form of wavelet transformation, into CNNs can improve the extraction of meaningful features. To this end, we developed a novel network module that uses the features of a scattering transform for an attention mechanism. We observed that this approach improves calcium segmentation results by up to 8.2 % (relative) in terms of the Dice coefficient and 24.8 % in terms of the modified Hausdorff distance. For lumen and vessel wall segmentation, the improvements are up to 2.3 % (relative) in terms of the Dice coefficient and 30.8 % in terms of the modified Hausdorff distance. Incorporating scattering transformations as a component of an attention block into CNNs thus improves segmentation results on small IVUS data sets. More generally, scattering transformations can help in situations where efficient feature extractors cannot be learned from the training data. This makes our attention module an interesting candidate for applications such as few-shot learning for patient adaptation or the detection of rare diseases.
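The abstract describes a module in which the features of a scattering transform drive an attention mechanism over CNN feature maps. The paper's actual architecture is not reproduced here; as a rough, hedged illustration only, the sketch below shows one plausible form of this idea: first-order scattering coefficients (moduli of oriented wavelet responses, here approximated with Gabor filters as a stand-in for Morlet wavelets) are pooled into a sigmoid attention map that gates the feature maps. All function names and parameters are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def gabor_filter(size, theta, freq=0.25, sigma=3.0):
    """Complex Gabor kernel at orientation theta (illustrative stand-in for a Morlet wavelet)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(2j * np.pi * freq * rot)

def first_order_scattering(img, n_orientations=4, size=7):
    """First-order scattering coefficients |img * psi_theta| for a few orientations."""
    H, W = img.shape
    img_f = np.fft.fft2(img)
    coeffs = []
    for k in range(n_orientations):
        psi = gabor_filter(size, theta=k * np.pi / n_orientations)
        # circular 'same'-size convolution via the FFT, then complex modulus
        resp = np.fft.ifft2(img_f * np.fft.fft2(psi, s=(H, W)))
        coeffs.append(np.abs(resp))
    return np.stack(coeffs)  # shape: (n_orientations, H, W)

def scattering_attention(features, img):
    """Gate CNN feature maps (C, H, W) with an attention map derived from scattering coefficients."""
    s = first_order_scattering(img)
    attn = 1.0 / (1.0 + np.exp(-s.mean(axis=0)))  # sigmoid pooling -> values in (0, 1)
    return features * attn[None, :, :]             # broadcast over the channel axis
```

In a trained network the pooling and gating would of course be learned layers rather than a fixed mean and sigmoid; the fixed-filter scattering part is what makes the module attractive for small data sets, since these features need no training data.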
Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
Paper Type: methodological development
Source Latex: zip
Primary Subject Area: Learning with Noisy Labels and Limited Data
Secondary Subject Area: Segmentation