Improving End-to-end Speech Translation by Leveraging Auxiliary Speech and Text Data

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: We present a method for introducing a text encoder into the pre-training of end-to-end speech translation systems. The text encoder improves the model's ability to adapt one modality (i.e., source-language speech) to another (i.e., source-language text). As a result, the speech translation model can learn from both unlabeled and labeled data, especially when source-language text data is abundant. In addition, we present a denoising method that makes the text encoder robust to both clean and noisy text. Our system sets a new state of the art on the MuST-C En-De, En-Fr, and LibriSpeech En-Fr tasks.
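
Below is a minimal, illustrative sketch of the general idea described in the abstract: a text encoder trained alongside a speech encoder, with a simple cross-modal adaptation loss that pulls speech representations toward text representations, and token-level noise injection as a stand-in for the denoising objective. This is not the authors' actual architecture or loss; the module sizes, the mean-pooled MSE adaptation loss, the masking-based noise function, and all hyperparameters are assumptions made purely for illustration.

```python
# Illustrative sketch only (assumed architecture, not the paper's method):
# a speech encoder and a text encoder whose representations are aligned by a
# simple adaptation loss, plus token noising for denoising text pre-training.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpeechEncoder(nn.Module):
    def __init__(self, feat_dim=80, d_model=256, n_layers=4):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, feats):            # feats: (B, T_speech, feat_dim)
        return self.encoder(self.proj(feats))


class TextEncoder(nn.Module):
    def __init__(self, vocab_size=1000, d_model=256, n_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, tokens):           # tokens: (B, T_text)
        return self.encoder(self.embed(tokens))


def add_token_noise(tokens, mask_id=3, p=0.15):
    """Randomly replace source-language tokens with a mask token so the text
    encoder sees both clean and noisy input (assumed denoising scheme)."""
    noise = torch.rand_like(tokens, dtype=torch.float) < p
    return torch.where(noise, torch.full_like(tokens, mask_id), tokens)


def adaptation_loss(speech_states, text_states):
    """Pull mean-pooled speech representations toward mean-pooled text
    representations; a simple stand-in for a modality-adaptation objective."""
    return F.mse_loss(speech_states.mean(dim=1), text_states.mean(dim=1))


if __name__ == "__main__":
    speech_enc, text_enc = SpeechEncoder(), TextEncoder()
    feats = torch.randn(2, 120, 80)              # dummy speech features
    tokens = torch.randint(4, 1000, (2, 20))     # dummy source-language text
    loss = adaptation_loss(speech_enc(feats), text_enc(add_token_noise(tokens)))
    loss.backward()                              # one joint pre-training step
    print(f"adaptation loss: {loss.item():.4f}")
```

In a full system, both encoders would also feed a shared translation decoder, so the speech branch benefits from the abundant source-language text seen by the text encoder; the sketch above shows only the cross-modal alignment and denoising pieces.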