Abstract: This paper investigates the adaptation of large vision models, initially pretrained on extensive natural-image datasets, to the analysis of physical signals. Inspired by their remarkable generalization and transferability across diverse computer vision tasks, we leverage large vision models to analyze ultrasonic wavefield images captured via physics-based imaging, which exhibit characteristics markedly different from natural images. To bridge the substantial gap between the source domain of natural images and the target wavefield patterns, we introduce the wavefield MAE model, featuring a two-stage adaptation process: self-supervised learning on a reference wavefield image dataset, followed by finetuning on the wavefield classification task. The proposed scheme progressively refines visual feature representations toward wavefield patterns by consolidating tripartite information: 1) natural-image patterns from pretraining, 2) simulated wavefield data that mirror the physics of wave propagation, and 3) real wavefield data from ultrasonic testing (UT). Comprehensive experimental comparisons validate the proposed feature adaptation scheme. Notably, the scheme is not confined to ultrasonic wavefield analysis; it applies broadly to adapting pretrained models to specific tasks with limited data availability.
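The two-stage adaptation described above (masked self-supervised learning on reference wavefield data, then supervised finetuning for classification) can be sketched in miniature. The following is a heavily simplified, hypothetical stand-in: a linear encoder/decoder trained to reconstruct randomly masked inputs in place of the paper's MAE, with synthetic arrays standing in for wavefield images; all names, sizes, and rates here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the datasets (illustrative, not real wavefield data).
n_ref, n_labeled, d, k = 200, 40, 64, 16
reference_wavefields = rng.normal(size=(n_ref, d))    # simulated/reference set
labeled_wavefields = rng.normal(size=(n_labeled, d))  # real UT scans
labels = rng.integers(0, 2, size=n_labeled)           # binary defect labels

# Stage 1: self-supervised adaptation. A linear encoder W and decoder V
# are trained to reconstruct inputs from a 25%-visible masked view,
# a drastic simplification of MAE-style masked reconstruction.
W = rng.normal(scale=0.1, size=(d, k))
V = rng.normal(scale=0.1, size=(k, d))
lr = 1e-2
for _ in range(100):
    keep = rng.random(size=reference_wavefields.shape) < 0.25  # mask 75%
    x_masked = reference_wavefields * keep
    z = x_masked @ W
    err = z @ V - reference_wavefields          # reconstruction error
    grad_V = (z.T @ err) / n_ref
    grad_W = (x_masked.T @ (err @ V.T)) / n_ref
    V -= lr * grad_V
    W -= lr * grad_W

# Stage 2: discard the decoder, finetune a classification head on the
# (much smaller) labeled set using logistic regression.
features = labeled_wavefields @ W
head = rng.normal(scale=0.1, size=k)
for _ in range(200):
    probs = 1.0 / (1.0 + np.exp(-(features @ head)))
    head -= 0.1 * (features.T @ (probs - labels)) / n_labeled

preds = (features @ head > 0).astype(int)
```

In this toy form, stage 1 uses only unlabeled reference data while stage 2 touches the labeled set, mirroring the progressive refinement the abstract describes; the real model would use a ViT-based MAE rather than linear maps.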