Union-Domain Knowledge Distillation for Underwater Acoustic Target Recognition

Published: 01 Jan 2025 · Last Modified: 09 Apr 2025 · IEEE Trans. Geosci. Remote Sens. 2025 · CC BY-SA 4.0
Abstract: Underwater acoustic target recognition (UATR) can be significantly empowered by advances in deep learning (DL). However, the effectiveness of DL-based UATR methods is often constrained by the limited computing resources available on underwater platforms. Most existing knowledge distillation (KD) strategies aim to build lightweight DL models, but they rarely consider the acoustic properties of underwater environments, making them less efficient for UATR tasks. Thus, fully harnessing the potential of DL techniques while ensuring model practicality is one of the urgent problems in UATR research. In this work, we introduce union-domain KD (UDKD) to build an accurate and lightweight UATR model. UDKD integrates two KD strategies: dual-frequency band distillation (DBD) and cross-domain masked distillation (CMD). DBD improves the learning process of a simple student model by decoupling spectrogram knowledge into local structural (i.e., line spectra) and global compositional (i.e., propagation pattern) aspects. CMD reduces redundant information introduced by the Fourier transform, enabling the student model to concentrate on essential signal elements and learn the underlying time-frequency distribution. Extensive experiments on two real-world oceanic datasets confirm the superior performance of UDKD over existing KD methods, achieving an accuracy of 94.81% ($\uparrow 3.19\%$ versus 91.62%). Notably, UDKD yields a 10.5% improvement in the prediction accuracy of the lightweight student model.
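The abstract gives no implementation details, so the following PyTorch sketch is only a hedged illustration of how a two-term, union-domain-style distillation loss might be assembled: a band-split feature term standing in for DBD, a randomly masked time-frequency term standing in for CMD, and a standard temperature-scaled logit term. All names, the masking scheme, and the weightings are assumptions, not the authors' method.

```python
# Hypothetical sketch of a union-domain-style KD loss; not the paper's code.
import torch
import torch.nn.functional as F

def udkd_loss(student_logits, teacher_logits,
              student_feat, teacher_feat,
              mask_ratio=0.5, temperature=4.0, alpha=0.5):
    """Combine logit distillation with two spectrogram-domain feature terms.

    student_feat / teacher_feat: time-frequency feature maps of shape
    (batch, channels, freq, time). All hyperparameters are illustrative.
    """
    # (1) Dual-band term: split features along the frequency axis and
    #     distill the low- and high-frequency halves separately.
    f_low_s, f_high_s = torch.chunk(student_feat, 2, dim=-2)
    f_low_t, f_high_t = torch.chunk(teacher_feat, 2, dim=-2)
    band_loss = F.mse_loss(f_low_s, f_low_t) + F.mse_loss(f_high_s, f_high_t)

    # (2) Masked term: randomly drop time-frequency bins and match only the
    #     retained ones, so the student focuses on the remaining content.
    mask = (torch.rand_like(teacher_feat) > mask_ratio).float()
    masked_loss = F.mse_loss(student_feat * mask, teacher_feat * mask)

    # (3) Standard KL-based logit distillation with temperature scaling.
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    return kd_loss + alpha * (band_loss + masked_loss)
```

In a training loop, this loss would be added to the usual cross-entropy on the student's predictions; how the actual UDKD balances its terms is not stated in the abstract.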