The quality of neural machine translation (NMT) depends heavily on large parallel corpora, which makes translation for low-resource languages challenging. This paper introduces a novel approach that leverages cross-lingual alignment knowledge from multilingual pre-trained language models (PLMs) to enhance low-resource NMT. Our method segments the translation model into source-encoding, target-encoding, and alignment modules, each initialized with a different pre-trained BERT model. Experiments on four translation directions across two low-resource language pairs show significant BLEU score improvements, validating the efficacy of our approach.
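To make the modular split concrete, below is a minimal sketch (not the authors' code) of a translation model with separate source-encoding, target-encoding, and alignment modules, each warm-started from a pre-trained BERT checkpoint. The checkpoint names, the cross-attention wiring, and the output head are illustrative assumptions, since the abstract does not specify implementation details.

```python
# Hypothetical sketch of the three-module split described in the abstract.
# Checkpoint choices and the alignment wiring are assumptions for illustration.
import torch
import torch.nn as nn
from transformers import BertModel


class ThreeModuleNMT(nn.Module):
    def __init__(self,
                 src_ckpt="bert-base-multilingual-cased",    # assumed checkpoint
                 tgt_ckpt="bert-base-multilingual-cased",    # assumed checkpoint
                 align_ckpt="bert-base-multilingual-cased",  # assumed checkpoint
                 vocab_size=119547):
        super().__init__()
        # Each module is initialized from a (possibly different) pre-trained BERT.
        self.src_encoder = BertModel.from_pretrained(src_ckpt)
        self.tgt_encoder = BertModel.from_pretrained(tgt_ckpt)
        self.aligner = BertModel.from_pretrained(align_ckpt)
        hidden = self.aligner.config.hidden_size
        # Cross-attention stands in for the paper's alignment mechanism.
        self.cross_attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        self.lm_head = nn.Linear(hidden, vocab_size)

    def forward(self, src_ids, src_mask, tgt_ids, tgt_mask):
        # Encode source and target sides with their respective PLMs.
        src_h = self.src_encoder(input_ids=src_ids, attention_mask=src_mask).last_hidden_state
        tgt_h = self.tgt_encoder(input_ids=tgt_ids, attention_mask=tgt_mask).last_hidden_state
        # Target states attend to source states (padding positions are masked out).
        aligned, _ = self.cross_attn(query=tgt_h, key=src_h, value=src_h,
                                     key_padding_mask=~src_mask.bool())
        # The BERT-initialized alignment module refines the fused representations.
        fused = self.aligner(inputs_embeds=aligned, attention_mask=tgt_mask).last_hidden_state
        return self.lm_head(fused)  # per-position logits over the target vocabulary
```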