Bi-SimCut: A Simple Strategy for Boosting Neural Machine Translation

Anonymous

08 Mar 2022 (modified: 05 May 2023) · NAACL 2022 Conference Blind Submission · Readers: Everyone
Paper Link: https://openreview.net/forum?id=N-xKHVWBCEM
Paper Type: Long paper (up to eight pages of content + unlimited references and appendices)
Abstract: We introduce Bi-SimCut: a simple but effective training strategy to boost neural machine translation (NMT) performance. It consists of two procedures: bidirectional pretraining and unidirectional finetuning. Both procedures utilize SimCut, a simple regularization method that enforces consistency between the output distributions of the original and the cutoff sentence pairs. Without leveraging extra data via back-translation or integrating large-scale pretrained models, Bi-SimCut achieves strong translation performance across five translation benchmarks (data sizes ranging from 160K to 20.2M): BLEU scores of $31.16$ for $\texttt{en}\rightarrow\texttt{de}$ and $38.37$ for $\texttt{de}\rightarrow\texttt{en}$ on the IWSLT14 dataset, $30.78$ for $\texttt{en}\rightarrow\texttt{de}$ and $35.15$ for $\texttt{de}\rightarrow\texttt{en}$ on the WMT14 dataset, and $27.17$ for $\texttt{zh}\rightarrow\texttt{en}$ on the WMT17 dataset. SimCut is not a new method but a simplified version of Cutoff (Shen et al., 2020) adapted for NMT, and it can be viewed as a perturbation-based method. Given the universality and simplicity of Bi-SimCut and SimCut, we believe they can serve as strong baselines for future NMT research.
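The core of SimCut, as the abstract describes it, is a consistency term between the model's output distributions on the original and the cutoff (perturbed) sentence pair. The PyTorch sketch below illustrates one plausible reading of that idea: token-level cutoff zeroes a random subset of source and target token embeddings, and a KL term ties the two output distributions together. The model interface, the padding index, the cutoff ratio `p_cut`, the weight `alpha`, and the choice of token-level cutoff with a single KL direction are assumptions made for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

PAD_ID = 1  # assumed padding index (fairseq convention); adjust to your vocabulary


def token_cutoff(emb, token_mask, p_cut=0.05):
    """Token-level cutoff: zero the embeddings of a random subset of tokens.

    emb:        (batch, seq_len, dim) token embeddings
    token_mask: (batch, seq_len) bool, True for real tokens, False for padding
    p_cut:      per-token probability of being cut (assumed value)
    """
    cut = (torch.rand(emb.shape[:2], device=emb.device) < p_cut) & token_mask
    return emb.masked_fill(cut.unsqueeze(-1), 0.0)


def simcut_loss(model, src_emb, tgt_emb, src_mask, tgt_mask, labels,
                alpha=3.0, p_cut=0.05):
    """Cross-entropy on the original pair plus a KL consistency term between
    the output distributions of the original and the cutoff sentence pair.

    `model` is assumed to map embeddings and masks to decoder logits of
    shape (batch, tgt_len, vocab); this interface is hypothetical.
    """
    logits = model(src_emb, tgt_emb, src_mask, tgt_mask)
    ce = F.cross_entropy(logits.transpose(1, 2), labels, ignore_index=PAD_ID)

    # Second forward pass on the perturbed (cutoff) sentence pair.
    logits_cut = model(token_cutoff(src_emb, src_mask, p_cut),
                       token_cutoff(tgt_emb, tgt_mask, p_cut),
                       src_mask, tgt_mask)

    # KL(p_orig || p_cutoff), summed over the vocabulary and averaged
    # over non-padding target positions.
    kl = F.kl_div(F.log_softmax(logits_cut, dim=-1),
                  F.log_softmax(logits, dim=-1),
                  log_target=True, reduction="none").sum(-1)
    kl = (kl * tgt_mask.float()).sum() / tgt_mask.sum()

    return ce + alpha * kl
```

Under the Bi-SimCut recipe, a loss of this form would first be applied while pretraining on bidirectional data (each sentence pair included in both translation directions) and then retained during unidirectional finetuning on the direction of interest.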
Presentation Mode: This paper will be presented virtually
Virtual Presentation Timezone: UTC+8
Copyright Consent Signature (type Name Or NA If Not Transferrable): Pengzhi Gao
Copyright Consent Name And Address: Baidu Inc. No. 10, Shangdi 10th Street, Beijing, 100085, China