Contrastive Learning of Relative Position Regression for One-Shot Object Localization in 3D Medical Images

Published: 01 Jan 2021, Last Modified: 12 May 2023, MICCAI (2) 2021
Abstract: Deep learning networks have shown promising performance for object localization in medical images, but they require a large amount of annotated data for supervised training. To address this problem, we propose: 1) a novel contrastive learning method that embeds the anatomical structure by predicting the Relative Position Regression (RPR) between any two patches from the same volume; 2) a one-shot framework for organ and landmark localization in volumetric medical images. Our main idea is that tissues and organs in different human bodies have similar relative positions and contexts, so we can predict the relative positions of non-local patches and thereby locate the target organ. Our one-shot localization framework is composed of three parts: 1) a deep network trained to project an input patch into a 3D latent vector representing its anatomical position; 2) a coarse-to-fine framework containing two projection networks, which provides more accurate localization of the target; 3) based on the coarse-to-fine model, a reduction of organ bounding-box (B-box) detection to locating six extreme points along the x, y, and z directions in the query volume. Experiments on multi-organ localization from head-and-neck (HaN) and abdominal CT volumes showed that our method achieved competitive performance in real time, being more accurate and $10^5$ times faster than template matching methods under the same one-shot setting for 3D medical images. Code is available at https://github.com/HiLab-git/RPR-Loc .
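
To make the RPR objective concrete, here is a minimal PyTorch sketch of what the training step could look like. It is illustrative only: the architecture, the names ProjectionNet and rpr_loss, and the use of an MSE regression loss are assumptions for exposition, not the authors' implementation (see the official code linked above).

```python
# Illustrative sketch of the RPR training objective (hypothetical names;
# the official implementation is at https://github.com/HiLab-git/RPR-Loc).
import torch
import torch.nn as nn

class ProjectionNet(nn.Module):
    """Projects a 3D patch to a 3D latent vector representing its anatomical position."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, 3)  # 3D latent position vector

    def forward(self, patch):  # patch: (B, 1, D, H, W)
        return self.head(self.features(patch).flatten(1))  # (B, 3)

def rpr_loss(net, patch_a, patch_b, offset_ab):
    """Regress the known relative position between two patches of one volume.

    offset_ab: displacement from the centre of patch_a to that of patch_b,
    shape (B, 3). An MSE loss is assumed here for illustration.
    """
    pred = net(patch_b) - net(patch_a)  # predicted relative position
    return nn.functional.mse_loss(pred, offset_ab)

# Usage: two patches cropped from the same volume with a known offset.
net = ProjectionNet()
a, b = torch.randn(2, 1, 32, 32, 32), torch.randn(2, 1, 32, 32, 32)
rpr_loss(net, a, b, torch.randn(2, 3)).backward()
```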
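At inference time, a single annotated support volume is enough to localize a point in a query volume. The sketch below shows one plausible localization step under that reading of the abstract; locate_step and extract_patch are hypothetical names, not functions from the released code.

```python
# Hypothetical one-shot localization step: move an initial guess in the
# query volume by the predicted relative offset to the target annotated
# in a single support volume. extract_patch is an assumed helper that
# crops a (1, 1, D, H, W) patch centred at a given (x, y, z) position.
import torch

def locate_step(net, query_vol, support_patch, start, extract_patch):
    # start: float tensor of shape (3,), e.g. the centre of the query volume
    z_target = net(support_patch)                    # latent position of the annotated target
    z_query = net(extract_patch(query_vol, start))   # latent position of the current guess
    return start + (z_target - z_query).squeeze(0)   # move by the predicted relative offset
```

Running such a step first with the coarse projection network and then with the fine one, on a patch around the coarse estimate, would realize the coarse-to-fine scheme; locating the six extreme points along x, y, and z in this way yields the organ B-box.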