Abstract: As the demand for shared pre-trained deep neural network models continues to rise, safeguarding their intellectual property becomes increasingly important. Existing studies predominantly concentrate on protecting pre-trained image recognition models, while little research covers pre-trained speech recognition models. In this paper, we propose a black-box watermarking method to authenticate the ownership of speech recognition models, mitigating the risk that attackers who gain access to a pre-trained model build unauthorized AI services on top of it. Specifically, we present three watermarking methods: a Gaussian noise watermark, an extreme-frequency Gaussian noise watermark, and an unrelated-audio watermark. The generated watermarks, embedded into models through training or fine-tuning, exhibit high fidelity and effectiveness, backed by rigorous experimental validation. Furthermore, our experiments reveal that the extreme-frequency noise backdoor makes the watermark more robust than the Gaussian noise and unrelated-audio watermarks.
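To make the trigger construction concrete, the following is a minimal sketch of how Gaussian noise and extreme-frequency trigger audio could be generated. The sampling rate, noise amplitude, cutoff frequency, and function names are illustrative assumptions rather than the paper's exact parameters.

```python
import numpy as np

SAMPLE_RATE = 16000  # assumed sampling rate; not specified in the abstract


def gaussian_noise_trigger(duration_s=1.0, std=0.01, seed=0):
    """Gaussian noise trigger: a fixed noise clip serving as the watermark key."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * SAMPLE_RATE)
    return (std * rng.standard_normal(n)).astype(np.float32)


def extreme_frequency_trigger(duration_s=1.0, std=0.01, cutoff_hz=7500, seed=0):
    """Extreme-frequency variant: Gaussian noise band-limited to frequencies
    near the edge of the representable band (cutoff_hz is a hypothetical value)."""
    noise = gaussian_noise_trigger(duration_s, std, seed)
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(len(noise), d=1.0 / SAMPLE_RATE)
    spec[freqs < cutoff_hz] = 0.0  # keep only the high-frequency components
    return np.fft.irfft(spec, n=len(noise)).astype(np.float32)
```

In a typical backdoor-based scheme of this kind, each trigger clip is paired with an owner-chosen target transcription during training or fine-tuning; ownership is then verified in a black-box manner by querying the suspect model with the trigger audio and checking whether it emits that transcription.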