Mutually Improved Endoscopic Image Synthesis and Landmark Detection in Unpaired Image-to-Image Translation

Abstract: The CycleGAN framework allows for unsupervised image-to-image translation of unpaired data. In a scenario of surgical training on a physical surgical simulator, this method can be used to transform endoscopic images of phantoms into images that more closely resemble the intra-operative appearance of the same surgical target structure. This can be viewed as a novel augmented-reality approach, which we coined Hyperrealism in previous work. In this use case, it is of paramount importance to render objects such as needles, sutures, or instruments consistently in both domains while altering the style to a more tissue-like appearance. Segmentation of these objects would allow for a direct transfer; however, contouring these partly tiny and thin foreground objects is cumbersome and potentially inaccurate. Instead, we propose to use landmark detection at the points where sutures penetrate the tissue. This objective is incorporated directly into the CycleGAN framework by treating the performance of pre-trained detector models as an additional optimization goal. We show that a task defined on these sparse landmark labels improves the consistency of synthesis by the generator network in both domains. Compared to a baseline CycleGAN architecture, our proposed extension (DetCycleGAN) improves mean precision (PPV) by $+61.32$, mean sensitivity (TPR) by $+37.91$, and mean $F_1$ score by $+0.4743$. Furthermore, we show that, through dataset fusion, the generated intra-operative images can be leveraged as additional training data for the detection network itself.
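The abstract describes incorporating a pre-trained landmark detector into the CycleGAN objective as an additional loss term. Below is a minimal PyTorch-style sketch of one way such a combined generator loss could look; it is not the authors' implementation, and all names (generator_AB, detector_B, lambda_det, heatmap_A, ...) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def generator_loss_with_detection(generator_AB, generator_BA, discriminator_B,
                                  detector_B, real_A, heatmap_A,
                                  lambda_cyc=10.0, lambda_det=1.0):
    """CycleGAN adversarial + cycle-consistency losses, plus a detection loss that
    encourages suture landmarks to stay in place after the A->B translation."""
    fake_B = generator_AB(real_A)   # phantom image rendered in intra-operative style
    rec_A = generator_BA(fake_B)    # cycle back to the phantom domain

    # Least-squares adversarial loss on the translated image.
    disc_out = discriminator_B(fake_B)
    adv = F.mse_loss(disc_out, torch.ones_like(disc_out))

    # Cycle-consistency loss.
    cyc = F.l1_loss(rec_A, real_A)

    # Detection loss: a pre-trained landmark detector (weights frozen, e.g. via
    # requires_grad_(False)) should recover the source-domain landmark heatmap
    # from the translated image; gradients flow only into the generator.
    pred_heatmap = detector_B(fake_B)
    det = F.binary_cross_entropy_with_logits(pred_heatmap, heatmap_A)

    return adv + lambda_cyc * cyc + lambda_det * det
```

In this sketch, the detector acts purely as a fixed critic of landmark consistency; only the generator parameters are updated by the detection term, which mirrors the idea of treating pre-trained detector performance as an additional optimization goal.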