Parallel Mechanism Decoders in Pretrained Language Model-based Neural Machine Translation

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: Pre-trained language models (PLMs) have demonstrated their effectiveness in enhancing neural machine translation (NMT). While researchers have made numerous attempts to enhance the encoder, existing decoder-enhancement methods neglect intra-layer information fusion, potentially leaving encoder information underutilized. In this paper, we propose a model featuring a parallel mechanism decoder, which integrates PLM enhancements and enables multi-granularity information fusion within the decoder. We evaluate our method on the IWSLT14 De-En task and obtain significant performance improvements with only minor modifications.
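The abstract does not specify the decoder's internals, but the idea of intra-layer fusion of parallel branches can be sketched as follows. In this hypothetical illustration, a decoder layer runs two cross-attention branches in parallel, one over the NMT encoder outputs and one over PLM representations, and mixes their contexts with a gate `alpha` before the residual connection; all function names, the single-head attention without projections, and the linear gating scheme are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, memory):
    # scaled dot-product attention; single head, no learned projections,
    # kept minimal for the sketch
    d = query.shape[-1]
    scores = query @ memory.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ memory

def parallel_decoder_layer(dec_states, enc_out, plm_out, alpha=0.5):
    """Hypothetical intra-layer fusion of two parallel branches.

    The encoder branch and the PLM branch attend in parallel, and their
    contexts are mixed by a scalar gate `alpha` (an assumed fusion scheme)
    before the residual connection.
    """
    ctx_enc = cross_attention(dec_states, enc_out)   # NMT encoder branch
    ctx_plm = cross_attention(dec_states, plm_out)   # PLM branch
    fused = alpha * ctx_enc + (1.0 - alpha) * ctx_plm
    return dec_states + fused                        # residual connection

# toy shapes: 4 target positions, 6 source positions, model dimension 8
rng = np.random.default_rng(0)
dec = rng.standard_normal((4, 8))
enc = rng.standard_normal((6, 8))
plm = rng.standard_normal((6, 8))
out = parallel_decoder_layer(dec, enc, plm)
print(out.shape)  # (4, 8)
```

With `alpha=1.0` the layer degenerates to a standard encoder-only cross-attention step, which makes the contribution of the parallel PLM branch easy to ablate.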
Paper Type: short
Research Area: Machine Translation
Contribution Types: NLP engineering experiment
Languages Studied: English, German