Generation of 3D Cardiovascular Ultrasound Labeled Data via Deep Learning

06 Apr 2021 (modified: 16 May 2023), Submitted to MIDL 2021
Abstract: We propose an image generation pipeline to synthesize 3D echocardiographic images with corresponding ground-truth labels, alleviating the need for data and for laborious, error-prone human labeling of images in subsequent Deep Learning (DL) tasks. The proposed method relies on detailed anatomical models of the heart, obtained from CT, as ground-truth label sources. These models are used to extract labeled slices which, together with a second dataset of real 3D echocardiographic images, are used to train a Generative Adversarial Network (GAN) - namely, a CycleGAN - to synthesize realistic 3D cardiovascular ultrasound images paired with ground-truth labels. A qualitative analysis of the synthesized images showed that the main structures of the heart are well delineated and closely follow the labels from the anatomical models, making it possible to use these 3D echocardiographic images and paired labels for training new DL 3D segmentation or landmark detection models.
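To make the described pipeline concrete, below is a minimal sketch of one CycleGAN training step, assuming an unpaired image-to-image translation setup between volumes rendered from the CT-derived anatomical label models (domain A) and real 3D echocardiographic volumes (domain B). The toy 3D generator and discriminator architectures, volume size, and loss weights are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a CycleGAN training iteration in PyTorch.
# All architectures and hyperparameters below are placeholders.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy 3D encoder-decoder standing in for a CycleGAN generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Toy 3D PatchGAN-style discriminator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Domain A: volumes rendered from the CT-derived anatomical label models.
# Domain B: real 3D echocardiographic volumes (unpaired with domain A).
G_ab, G_ba = Generator(), Generator()            # A->B and B->A mappings
D_a, D_b = Discriminator(), Discriminator()
adv_loss, cyc_loss = nn.MSELoss(), nn.L1Loss()   # LSGAN + cycle-consistency losses
opt_g = torch.optim.Adam(list(G_ab.parameters()) + list(G_ba.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(list(D_a.parameters()) + list(D_b.parameters()), lr=2e-4)

real_a = torch.randn(1, 1, 32, 32, 32)  # placeholder label-model volume
real_b = torch.randn(1, 1, 32, 32, 32)  # placeholder real ultrasound volume

# Generator update: fool both discriminators while preserving cycle consistency.
fake_b, fake_a = G_ab(real_a), G_ba(real_b)
pred_fb, pred_fa = D_b(fake_b), D_a(fake_a)
loss_g = (adv_loss(pred_fb, torch.ones_like(pred_fb))
          + adv_loss(pred_fa, torch.ones_like(pred_fa))
          + 10.0 * cyc_loss(G_ba(fake_b), real_a)
          + 10.0 * cyc_loss(G_ab(fake_a), real_b))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Discriminator update: separate real volumes from synthesized ones.
pred_rb, pred_ra = D_b(real_b), D_a(real_a)
pred_fb, pred_fa = D_b(fake_b.detach()), D_a(fake_a.detach())
loss_d = (adv_loss(pred_rb, torch.ones_like(pred_rb))
          + adv_loss(pred_fb, torch.zeros_like(pred_fb))
          + adv_loss(pred_ra, torch.ones_like(pred_ra))
          + adv_loss(pred_fa, torch.zeros_like(pred_fa)))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()
```

After training, applying the A-to-B generator to a volume rendered from an anatomical model would yield a synthetic ultrasound volume whose ground-truth labels are inherited directly from that model.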
Paper Type: both
Primary Subject Area: Image Synthesis
Secondary Subject Area: Application: Other
Paper Status: original work, not submitted yet
Source Code Url: The short paper is based on original work not submitted yet. The source code cannot be shared at this point.
Data Set Url: Under GDPR, the datasets used in this project are not available for public use.
Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.