Adversarially Fine-tuned Self-Supervised Framework for Automated Landmark Detection in Intrapartum Ultrasound
Keywords: Adversarial Learning, Intrapartum Ultrasound, Landmark Detection, Self-Supervised Learning
TL;DR: We introduce a self-supervised, attention-augmented, and adversarially fine-tuned pipeline for automated landmark detection in intrapartum ultrasound, enabling standardized and reliable labor monitoring.
Abstract: Accurate assessment of fetal head progression during labor is essential for guiding timely clinical interventions and improving maternal-fetal outcomes. The World Health Organization's Labour Care Guide emphasizes standardized, evidence-based monitoring tools such as the Angle of Progression (AoP), derived from intrapartum ultrasound. However, current clinical practice relies on manual landmark annotation, which is labor-intensive and subject to variability. To address this limitation, we present a fully automated pipeline for anatomical landmark detection in intrapartum ultrasound as part of the Intrapartum Ultrasound Grand Challenge (IUGC) 2025. Our method combines (i) self-supervised pretraining on unlabeled standard plane ultrasound images to establish strong anatomical priors, (ii) an attention-enhanced decoder architecture for effective spatial localization, and (iii) adversarial fine-tuning using a PatchGAN-style discriminator to ensure anatomical plausibility and spatial precision. The model detects three key landmarks—two on the pubic symphysis and one on the fetal head—enabling robust AoP estimation. Our approach achieves a Mean Radial Error (MRE) of 25.66 pixels and an AoP Mean Absolute Error (MAE) of 8.54 degrees. These results highlight the potential of self-supervised learning and adversarially guided strategies to reduce observer variability, standardize labor monitoring, and support global initiatives for safer, more equitable intrapartum care.
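The abstract's two quantitative measures can be illustrated with a short sketch. The function names, the exact geometric convention for AoP (here: the angle at the inferior symphysis landmark between the symphysis long axis and the line to the fetal head landmark), and the use of NumPy are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def angle_of_progression(sym_sup, sym_inf, head):
    """Approximate AoP in degrees from three 2-D landmarks.

    Assumed convention: the angle measured at the inferior pubic
    symphysis point (sym_inf) between the symphysis long axis
    (sym_inf -> sym_sup) and the line to the fetal head landmark
    (sym_inf -> head). The clinical AoP uses a tangent to the fetal
    skull; a single head landmark only approximates it.
    """
    u = np.asarray(sym_sup, dtype=float) - np.asarray(sym_inf, dtype=float)
    v = np.asarray(head, dtype=float) - np.asarray(sym_inf, dtype=float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def mean_radial_error(pred, gt):
    """Mean Euclidean distance (in pixels) between predicted and
    ground-truth landmarks; shape (N, 2) each."""
    pred, gt = np.asarray(pred, dtype=float), np.asarray(gt, dtype=float)
    return float(np.mean(np.linalg.norm(pred - gt, axis=1)))

# Example: perpendicular head line gives a 90-degree AoP under this convention.
aop = angle_of_progression(sym_sup=[0, 1], sym_inf=[0, 0], head=[1, 0])
# Example: every prediction offset by (3, 4) pixels gives an MRE of 5.
mre = mean_radial_error([[3, 4], [13, 4]], [[0, 0], [10, 0]])
```

The AoP MAE reported in the abstract would then be the mean absolute difference between such predicted angles and expert-annotated ones over the test set.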
Submission Number: 8