Keywords: Intrapartum Ultrasound, Landmark Detection, Semi-supervised Learning, Pseudo-labeling
Abstract: Accurate and reliable detection of anatomical landmarks in intrapartum ultrasound is a critical component of the quantitative and objective assessment of fetal head descent, which plays an essential role in guiding clinical decision-making during labor. However, manual annotation of ultrasound images is time-consuming, requires expert knowledge, and suffers from inter-observer variability. Moreover, the scarcity of fully annotated datasets poses additional challenges for training high-performance deep learning models in this domain. To address these challenges, we propose a three-stage framework that effectively leverages both fully labeled and partially labeled data to improve landmark detection performance. In Stage 1, a TransUNet model is pre-trained on a large-scale video-derived segmentation dataset and iteratively fine-tuned on point-annotated images using an error-weighted loss strategy. Stage 2 incorporates high-confidence pseudo-labeled data generated by the refined model, with post-processing applied to ensure label quality. Stage 3 fuses the predictions of three independently trained TransUNet models via averaging to enhance stability and robustness. On the IUGC 2025 Landmark Detection Challenge test set, our method achieves an Average Point Distance of 13.28 pixels and an Angle of Progression (AOP) MAE of 3.87 degrees, demonstrating the effectiveness of semi-supervised learning and model ensembling for intrapartum ultrasound landmark detection.
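A minimal sketch of the Stage 3 fusion step is shown below. It assumes each trained model outputs per-landmark heatmaps and that averaging is done at the heatmap level before coordinates are decoded by argmax; the tensor shapes, model interface, and decoding choice are illustrative assumptions, not implementation details taken from the abstract.

```python
# Sketch of Stage 3 ensembling under assumed interfaces:
# each model maps a (1, C, H, W) image tensor to (1, K, H, W) landmark heatmaps.
import torch


def extract_landmarks(heatmap: torch.Tensor) -> torch.Tensor:
    """Decode a (K, H, W) heatmap into (K, 2) pixel coordinates (x, y) via argmax."""
    k, h, w = heatmap.shape
    flat_idx = heatmap.view(k, -1).argmax(dim=1)
    ys = flat_idx // w
    xs = flat_idx % w
    return torch.stack([xs, ys], dim=1).float()


@torch.no_grad()
def ensemble_predict(models, image: torch.Tensor) -> torch.Tensor:
    """Average heatmaps from independently trained models, then decode landmarks."""
    heatmaps = [m(image.unsqueeze(0)).squeeze(0) for m in models]  # each (K, H, W)
    mean_heatmap = torch.stack(heatmaps).mean(dim=0)
    return extract_landmarks(mean_heatmap)
```

Averaging heatmaps rather than decoded coordinates is one common way to realize such a fusion, since it lets mutually consistent peaks reinforce each other before the argmax; coordinate-level averaging would be an equally simple alternative.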
Submission Number: 2