Keywords: self-supervised ultrasound texture
TL;DR: We introduce texture ultrasound semantic analysis (TUSA), a self-supervised model trained to decompose B-mode ultrasound into distinct texture channels across multiple datasets and organs.
Abstract: Ultrasound is an especially challenging modality to interpret because it requires unique domain knowledge to draw conclusions about the anatomy from analysis of B-mode intensity. For this reason, there is great value in transforming B-mode images into a color scheme more closely aligned with the anatomy and its tissue properties, such as speed of sound and scattering coefficient, to simplify ultrasound analysis. In this work, we introduce texture ultrasound semantic analysis (TUSA), a self-supervised transformer model trained to decompose B-mode ultrasound into distinct channels defined by the texture they represent. We train our model on 10 freely available ultrasound datasets and demonstrate superior segmentation performance and consistency on an additional 11th dataset compared to training on B-mode intensity. We conclude that by incorporating TUSA into the training pipeline, downstream models can focus on recognizing the anatomy instead of extracting features from intensity.
Submission Number: 9