Multiplicative Position-aware Transformer Models for Language Understanding

Anonymous

16 Jan 2022 (modified: 05 May 2023) · ACL ARR 2022 January Blind Submission
Abstract: To utilize positional ordering information in transformer models, various flavors of absolute and relative position embeddings have been proposed. However, there is no comprehensive comparison of position embedding methods in the literature. In this paper, we review existing position embedding methods and compare their accuracy on downstream NLP tasks using our own implementations. We also propose a novel multiplicative embedding method, which achieves superior accuracy compared to existing methods. Finally, we show that our proposed embedding method, used as a drop-in replacement for the default absolute position embedding, improves RoBERTa-base and RoBERTa-large models on the SQuAD1.1 and SQuAD2.0 datasets.
Paper Type: short

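The abstract does not spell out how the multiplicative interaction enters the attention computation. Below is a minimal PyTorch sketch of one plausible reading, where a learned per-head, per-relative-distance factor scales the query-key dot product instead of being added to it. The module name, the distance clipping, and the exact placement of the multiplicative term are illustrative assumptions, not the paper's definitive formulation.

```python
# Illustrative sketch (assumed formulation, not the paper's exact method):
# self-attention in which a learned relative-position factor multiplies the
# query-key score rather than being added to it.
import torch
import torch.nn as nn


class MultiplicativeRelativeAttention(nn.Module):
    def __init__(self, hidden_size: int, num_heads: int, max_distance: int = 128):
        super().__init__()
        assert hidden_size % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = hidden_size // num_heads
        self.max_distance = max_distance
        self.q_proj = nn.Linear(hidden_size, hidden_size)
        self.k_proj = nn.Linear(hidden_size, hidden_size)
        self.v_proj = nn.Linear(hidden_size, hidden_size)
        self.out_proj = nn.Linear(hidden_size, hidden_size)
        # One learned scaling factor per clipped relative distance and head,
        # initialized to 1 so the layer starts out identical to vanilla attention.
        self.rel_scale = nn.Parameter(torch.ones(2 * max_distance + 1, num_heads))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        bsz, seq_len, hidden = x.shape
        shape = (bsz, seq_len, self.num_heads, self.head_dim)
        q = self.q_proj(x).view(shape).transpose(1, 2)  # (B, H, T, d)
        k = self.k_proj(x).view(shape).transpose(1, 2)
        v = self.v_proj(x).view(shape).transpose(1, 2)

        # Standard scaled dot-product scores: (B, H, T, T)
        scores = torch.matmul(q, k.transpose(-2, -1)) / self.head_dim ** 0.5

        # Clipped relative distances j - i, shifted into [0, 2 * max_distance].
        pos = torch.arange(seq_len, device=x.device)
        rel = (pos[None, :] - pos[:, None]).clamp(
            -self.max_distance, self.max_distance
        ) + self.max_distance

        # Multiplicative interaction: scale each score by its per-distance,
        # per-head factor (the assumed "multiplicative" position term).
        scores = scores * self.rel_scale[rel].permute(2, 0, 1).unsqueeze(0)

        attn = torch.softmax(scores, dim=-1)
        out = torch.matmul(attn, v).transpose(1, 2).reshape(bsz, seq_len, hidden)
        return self.out_proj(out)
```

Because the multiplicative factors are initialized to 1, the layer behaves exactly like standard self-attention at initialization, which is consistent with the abstract's framing of the method as a drop-in replacement for the default position embedding; for example, `MultiplicativeRelativeAttention(768, 12)(torch.randn(2, 16, 768))` returns a tensor of shape `(2, 16, 768)`.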