Keywords: Synthetic data generation, Virtual colonoscopy, Depth estimation
TL;DR: We propose a texture synthesis method to create a realistic-looking virtual colonoscopy.
Abstract: In virtual colonoscopy, computer vision techniques focus on depth estimation, photometric tracking, and simultaneous localization and mapping (SLAM). To narrow the domain gap between virtual and real colonoscopy data, it is necessary to utilize real-world data or to employ realistic-looking virtual datasets. We introduce a texture synthesis and outpainting strategy based on the Mask-Aware Transformer. The method generates textures for the inner colon surface suitable for virtual colonoscopy, producing realistic-looking, controllable, and varied synthesized textures. Using the generated virtual colonoscopy, we created an RGB-D dataset of 9 video recordings. Each sequence was generated from a distinct colon model, accumulating a total of 14,120 frames paired with ground-truth depth. In evaluations of generalizability across various datasets, a depth estimation model trained on our dataset exhibited superior transfer performance.
Submission Number: 65