Image Synthesis with Generative Adversarial Networks to Augment Tool Detection in Microsurgery

10 Dec 2021 (modified: 16 May 2023) · Submitted to MIDL 2022 · Readers: Everyone
Keywords: Surgical tool detection, AI-assisted surgery, microsurgery, generative adversarial networks, StyleGAN, data augmentation, limited data, nearest neighbors.
TL;DR: A novel application and investigation into the potential of generative adversarial networks (GANs) for synthesizing realistic images from limited training sets of microsurgical training tasks and real neurosurgical procedures
Abstract: Deep learning-based computer vision applications in surgery tend to lack sufficiently rich training data. This shortcoming leads to models that often perform well only in a limited number of surgical settings. In this paper, for the first time, we investigate whether Generative Adversarial Networks (GANs) can create realistic images of microsurgical procedures to augment training data for surgical tool detection and other computer vision applications. We employ video recordings from microsurgical training sessions and from real neurosurgical procedures to train and evaluate two recent GAN models, StyleGAN2 with Adaptive Discriminator Augmentation (ADA) and StyleGAN2 with Differentiable Augmentation (DiffAugment), as both have shown promising results in high-resolution, realistic image generation across various applications. The models were trained with limited data for both unconditional and conditional image generation, where the conditional models generated images with and without tools to augment the background scenes and tissues. The resulting synthetic images were assessed using Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and expert evaluation by a neurosurgeon. Our results show that the unconditional models achieved better scores than the conditional models, and the gap between the two depended on the dataset. Furthermore, the best FID scores, 42.16 and 25.17, were achieved for a bimanual handling practice task and are comparable to the scores reported in benchmark experiments for StyleGAN2 with DiffAugment. Visual inspection showed that while the synthetic images had flaws that exposed their true origin to the human eye, a sizable portion of them nonetheless included identifiable surgical instruments. Analysis of nearest neighbors in pixel space indicated that the trained networks generate new samples rather than merely reproducing training images, especially for the real neurosurgical procedures. In future work, the synthetic surgical images will be used for GAN-based data augmentation for computational instrument detection in microsurgery.
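
As a rough illustration of the nearest-neighbor analysis mentioned in the abstract (not the authors' implementation), the sketch below retrieves, for each synthetic image, its closest training frame by L2 distance in pixel space, which is one way to check that a GAN is producing new samples rather than memorizing its training set. The file paths, file format, and image size are illustrative assumptions.

```python
# Minimal sketch: pixel-space nearest-neighbor check between synthetic
# samples and real training frames. Paths and image size are assumptions.
import numpy as np
from pathlib import Path
from PIL import Image

def load_images(folder, size=(256, 256)):
    """Load all PNG images in a folder, resize, and flatten to row vectors."""
    arrays = []
    for path in sorted(Path(folder).glob("*.png")):
        img = Image.open(path).convert("RGB").resize(size)
        arrays.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
    return np.stack(arrays)

real = load_images("data/real_frames")        # training frames (assumed path)
fake = load_images("samples/stylegan2_ada")   # generated samples (assumed path)

# For each synthetic image, report the closest training frame and its distance.
for i, f in enumerate(fake):
    dists = np.linalg.norm(real - f, axis=1)
    j = int(np.argmin(dists))
    print(f"synthetic {i}: nearest training frame {j}, L2 distance {dists[j]:.2f}")
```

Large minimum distances suggest the generator is synthesizing genuinely new content; near-zero distances would point to memorization. FID and KID, by contrast, compare feature-space statistics of the two image sets rather than individual pixels.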
Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
Paper Type: validation/application paper
Primary Subject Area: Image Synthesis
Secondary Subject Area: Learning with Noisy Labels and Limited Data
Confidentiality And Author Instructions: I read the call for papers and author instructions. I acknowledge that exceeding the page limit and/or altering the latex template can result in desk rejection.