Keywords: neural machine translation, Transformer, sequence-to-sequence pre-training
Abstract: Sequence-to-sequence (\textit{seq2seq}) pre-training has achieved remarkable success in natural language generation (NLG). Generally, the powerful encoding and generation capacities of pre-trained seq2seq models can significantly improve most NLG tasks when the models are fine-tuned with task-specific data. However, as a cross-lingual generation task, machine translation additionally requires the ability to transfer representations across languages (i.e., a \textit{translation model}). Fine-tuning the pre-trained models to learn this translation model, which is not covered by the self-supervised pre-training, leads to the \textit{catastrophic forgetting} problem. This paper presents a dual-channel recombination framework for translation (\textsc{DcRT}) to address this problem. In the proposed approach, we incorporate two cross-attention networks into the pre-trained seq2seq model to fetch contextual information and require them to learn the \textit{translation} and \textit{language} models, respectively. The model then generates outputs from the composite representation. Experimental results on multiple translation tasks demonstrate that the proposed \textsc{DcRT} achieves considerable improvements over several strong baselines while tuning less than 20\% of the parameters. Furthermore, \textsc{DcRT} can incorporate multiple translation tasks into a single model without performance degradation, drastically reducing computation and storage costs.
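To make the dual-channel idea in the abstract concrete, below is a minimal PyTorch sketch of a decoder layer with two cross-attention channels whose outputs are recombined before the feed-forward block. The class name, the gated recombination, and the choice of what each channel attends to are assumptions for illustration only, not the paper's actual specification.

```python
import torch
import torch.nn as nn


class DualChannelDecoderLayer(nn.Module):
    """Hypothetical sketch: one cross-attention channel for the 'translation'
    model and one for the 'language' model, recombined into a composite
    representation (the gating scheme here is an assumption)."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Two cross-attention channels inserted into the pre-trained decoder layer.
        self.cross_attn_trans = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn_lang = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.recombine = nn.Linear(2 * d_model, d_model)  # assumed recombination
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, tgt, src_memory, lang_memory):
        # Standard self-attention over target-side states.
        h, _ = self.self_attn(tgt, tgt, tgt, need_weights=False)
        x = self.norm1(tgt + h)
        # Channel 1: attend to source-language encoder states ("translation" channel).
        c_trans, _ = self.cross_attn_trans(x, src_memory, src_memory, need_weights=False)
        # Channel 2: attend to target-side context ("language" channel; assumed input).
        c_lang, _ = self.cross_attn_lang(x, lang_memory, lang_memory, need_weights=False)
        # Recombine both channels into a composite representation.
        composite = self.recombine(torch.cat([c_trans, c_lang], dim=-1))
        x = self.norm2(x + composite)
        return self.norm3(x + self.ffn(x))
```

Under this sketch, only the two added cross-attention channels and the recombination layer would be tuned while the pre-trained backbone stays frozen, which is consistent with the abstract's claim of tuning less than 20\% of the parameters.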
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Applications (eg, speech processing, computer vision, NLP)