Modeling Context With Linear Attention for Scalable Document-Level Translation


05 Jun 2022, 15:36 (modified: 05 Jun 2022, 15:51) · ACL ARR 2022 June Blind Submission
Abstract: Document-level machine translation leverages inter-sentence dependencies to produce more coherent and consistent translations. However, these models, predominantly based on transformers, are difficult to scale to long documents as their attention layers have quadratic complexity in the sequence length. Recent efforts on efficient attention improve scalability, but their effect on document translation remains unexplored. In this work, we investigate the efficacy of a recent linear attention model by Peng et al. (2021) on document translation and augment it with a sentential gate to promote a recency inductive bias. We evaluate the model on IWSLT 2015 and OpenSubtitles 2018 against the transformer, demonstrating substantially increased decoding speed on long sequences with similar or better BLEU scores. We show that sentential gating further improves translation quality on IWSLT.
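A minimal PyTorch sketch of the kind of mechanism the abstract describes: causal linear attention maintained as a running key–value summary, with a forget gate applied at sentence boundaries to induce the recency bias mentioned above. The feature map (elu + 1), the scalar `gate`, and the `sent_boundary` mask are illustrative assumptions, not the submission's implementation; Peng et al. (2021)'s model uses a random-feature formulation rather than this simplified form.

```python
import torch


def gated_linear_attention(q, k, v, sent_boundary, gate=0.9, eps=1e-6):
    """Causal linear attention in recurrent form with a sentence-level forget gate.

    q, k, v:        (batch, length, dim) query/key/value projections
    sent_boundary:  (batch, length) bool, True at the first token of each sentence
    gate:           decay in (0, 1); smaller values forget earlier sentences faster
    """
    # Non-negative feature map (elu + 1), used here only for illustration;
    # Peng et al. (2021) instead use random features.
    phi = lambda x: torch.nn.functional.elu(x) + 1.0
    q, k = phi(q), phi(k)

    batch, length, dim = q.shape
    state = q.new_zeros(batch, dim, dim)   # running sum of phi(k_t) v_t^T
    norm = q.new_zeros(batch, dim)         # running sum of phi(k_t)
    outputs = []
    for t in range(length):
        # Decay the accumulated context whenever a new sentence starts,
        # so older sentences contribute less (the recency inductive bias).
        b = sent_boundary[:, t].float().unsqueeze(-1)   # (batch, 1)
        decay = 1.0 - b * (1.0 - gate)                  # gate at boundaries, else 1
        state = decay.unsqueeze(-1) * state
        norm = decay * norm

        # Standard linear-attention state update and read-out.
        state = state + k[:, t].unsqueeze(-1) * v[:, t].unsqueeze(1)
        norm = norm + k[:, t]
        num = torch.einsum("bd,bde->be", q[:, t], state)
        den = torch.einsum("bd,bd->b", q[:, t], norm).unsqueeze(-1) + eps
        outputs.append(num / den)
    return torch.stack(outputs, dim=1)      # (batch, length, dim)
```

In this recurrent view the per-token decoding cost is independent of the sequence length, which is the source of the speedup over quadratic attention on long documents.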
Paper Type: short
Editor Reassignment: yes
Reviewer Reassignment: yes