Abstract: Ultrasound (US) images suffer from artefacts that limit their diagnostic value, notably acoustic shadows. Shadows depend on probe orientation, so each view provides a distinct, partial depiction of the anatomy. In this work, we fuse partially imaged fetal head anatomy, acquired from numerous views, into a single coherent compounding of the full anatomy. First, a stream of freehand 3D US images is acquired, capturing as many different views as possible. The imaged anatomy at each time-point is then independently aligned to a canonical pose using an iterative spatial transformer network (iSTN), making our approach robust to fast fetal and probe motion. Second, the images are fused by averaging only the best (most salient) features from all images, producing a more detailed compounding. Finally, the compounding is iteratively refined using a groupwise registration approach. We evaluate our compounding approach quantitatively and qualitatively, comparing it with average compounding and with individual US frames. We also evaluate alignment accuracy using two physically attached probes that capture separate views simultaneously, providing ground truth. Lastly, we demonstrate the potential clinical impact of our method for assessing cranial, facial and external ear abnormalities, with automated atlas-based masking and 3D volume rendering.
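The saliency-weighted fusion step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the actual saliency measure used in the work is not specified here, so local gradient magnitude is assumed as a stand-in proxy, and the helper names (`saliency`, `salient_compound`) are hypothetical.

```python
import numpy as np

def saliency(vol, eps=1e-6):
    # Hypothetical saliency proxy: local gradient magnitude.
    # (A stand-in assumption; the paper's actual saliency measure may differ.)
    grads = np.gradient(vol.astype(np.float64))
    return np.sqrt(sum(g ** 2 for g in grads)) + eps

def salient_compound(volumes):
    # Fuse pre-aligned volumes by weighting each voxel by its saliency,
    # so the most detailed (least shadowed) view dominates at each location,
    # rather than taking a plain average across all views.
    weights = np.stack([saliency(v) for v in volumes])
    vols = np.stack([v.astype(np.float64) for v in volumes])
    return (weights * vols).sum(axis=0) / weights.sum(axis=0)

# Toy usage: two "views" of the same scene, each informative in a
# different half (mimicking view-dependent acoustic shadowing).
a = np.zeros((8, 8, 8)); a[:4] = np.random.rand(4, 8, 8)
b = np.zeros((8, 8, 8)); b[4:] = np.random.rand(4, 8, 8)
fused = salient_compound([a, b])
```

In contrast to uniform averaging, which dilutes detail wherever one view is shadowed, this weighting keeps the contribution of low-saliency (shadowed) regions small at each voxel.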