Multi-view Cross-Modality MR Image Translation for Vestibular Schwannoma and Cochlea Segmentation

Published: 01 Jan 2022, Last Modified: 11 Apr 2025. BrainLes@MICCAI (2) 2022. License: CC BY-SA 4.0
Abstract: In this work, we propose a multi-view image translation framework that translates contrast-enhanced \(\text {T}_1\) (ce\(\text {T}_1\)) MR images into high-resolution \(\text {T}_2\) (hr\(\text {T}_2\)) MR images for unsupervised vestibular schwannoma and cochlea segmentation. We adopt two image translation models in parallel, one using a pixel-level consistency constraint and the other a patch-level contrastive constraint. This lets us augment pseudo-hr\(\text {T}_2\) images reflecting different perspectives, which ultimately leads to a high-performing segmentation model. Our experimental results on the CrossMoDA challenge show that the proposed method achieves enhanced performance on vestibular schwannoma and cochlea segmentation.
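To illustrate the patch-level contrastive constraint mentioned in the abstract, below is a minimal NumPy sketch of an InfoNCE-style loss over patch embeddings, in the spirit of contrastive unpaired translation: each translated patch is pulled toward the source patch at the same spatial location and pushed away from patches at other locations. The function name, shapes, and temperature are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def patch_nce_loss(feat_src, feat_tgt, tau=0.07):
    """InfoNCE-style patch contrastive loss (illustrative sketch).

    feat_src, feat_tgt: (N, D) L2-normalized patch embeddings taken at the
    same N spatial locations of the source image and its translation.
    """
    # Cosine-similarity logits between every translated patch (rows)
    # and every source patch (columns), scaled by temperature tau.
    logits = feat_tgt @ feat_src.T / tau          # (N, N)
    # Positives sit on the diagonal (same spatial location);
    # all off-diagonal entries act as negatives.
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

When the translated patch embeddings match the source embeddings exactly, the diagonal dominates and the loss approaches zero; mismatched embeddings yield a larger loss, which is what drives the translation network to preserve local content.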