Faster Transformer Decoding: N-gram Masked Self-Attention

Published: 01 Jan 2020, Last Modified: 03 Apr 2024 · CoRR 2020
Abstract: Motivated by the fact that most of the information relevant to the prediction of target tokens is drawn from the source sentence $S=s_1, \ldots, s_S$, we propose truncating the target-side window used for computing self-attention by making an $N$-gram assumption. Experiments on WMT EnDe and EnFr data sets show that the $N$-gram masked self-attention model loses very little in BLEU score for $N$ values in the range $4, \ldots, 8$, depending on the task.
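The abstract only names the technique, so the sketch below is a minimal illustration of what an N-gram mask on target-side self-attention looks like: each position may attend to itself and the preceding N−1 tokens, instead of the full causal prefix. The function names (`ngram_causal_mask`, `ngram_masked_self_attention`) and the single-head, projection-free formulation are assumptions made for this sketch, not the authors' implementation.

```python
import torch

def ngram_causal_mask(seq_len: int, n: int) -> torch.Tensor:
    """Boolean mask: position i may attend to positions j in [i - n + 1, i],
    i.e. itself plus the previous n - 1 tokens (illustrative assumption)."""
    idx = torch.arange(seq_len)
    rel = idx.unsqueeze(1) - idx.unsqueeze(0)   # rel[i, j] = i - j
    return (rel >= 0) & (rel < n)               # causal and within the N-gram window

def ngram_masked_self_attention(q, k, v, n: int):
    """Scaled dot-product self-attention restricted to an N-gram window.
    q, k, v: (batch, seq_len, d_model)."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5         # (batch, seq, seq)
    mask = ngram_causal_mask(q.size(-2), n).to(q.device)
    scores = scores.masked_fill(~mask, float("-inf"))   # hide tokens outside the window
    return torch.softmax(scores, dim=-1) @ v
```

Under this masking, a decoder step only needs the keys and values of the last N−1 target tokens rather than the whole generated prefix, which is the source of the decoding speedup the title refers to.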