EIT: Enhanced Interactive Transformer for Sequence Generation

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Transformer, Multi-head self-attention, Sequence Generation, Machine Translation
Abstract: In this work, we tackle the head degradation problem in multi-head attention. We propose an Enhanced Interactive Transformer (EIT) architecture in which standard multi-head self-attention is replaced with enhanced multi-head attention (EMHA). EMHA removes the one-to-one mapping constraint between queries and keys in multiple subspaces, allowing each query to attend to multiple keys. On top of that, we develop a method to make full use of this many-to-many mapping by introducing two interaction models, namely Inner-Subspace Interaction and Cross-Subspace Interaction. Extensive experiments on a wide range of sequence generation tasks (e.g., machine translation, abstractive summarization, and grammar correction) show its superiority, with a very modest increase in model size.
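
To make the mapping change concrete, below is a minimal sketch (in PyTorch, which the abstract does not specify) of the many-to-many attention it describes: every query subspace scores against every key subspace, instead of the one-to-one head pairing of standard multi-head attention. The function name `many_to_many_attention` and the final averaging over key subspaces are illustrative assumptions; the paper's actual Inner-Subspace and Cross-Subspace Interaction models are not reproduced here.

```python
import torch
import torch.nn.functional as F


def many_to_many_attention(x, w_q, w_k, w_v, n_heads):
    """x: [batch, seq, d_model]; w_q, w_k, w_v: [d_model, d_model]."""
    b, t, d = x.shape
    d_head = d // n_heads
    # Project, then split into per-head subspaces: [b, n_heads, t, d_head].
    q = (x @ w_q).view(b, t, n_heads, d_head).transpose(1, 2)
    k = (x @ w_k).view(b, t, n_heads, d_head).transpose(1, 2)
    v = (x @ w_v).view(b, t, n_heads, d_head).transpose(1, 2)
    # Standard MHA pairs query head i only with key head i. Here every
    # query subspace h attends to every key subspace g, so scores has
    # shape [b, n_heads (query), n_heads (key), t, t].
    scores = torch.einsum("bhqd,bgkd->bhgqk", q, k) / d_head ** 0.5
    attn = F.softmax(scores, dim=-1)
    # Aggregate values within each key subspace, then average the n_heads
    # results per query subspace (a crude stand-in for the paper's
    # learned Inner-/Cross-Subspace Interaction models).
    out = torch.einsum("bhgqk,bgkd->bhgqd", attn, v).mean(dim=2)
    return out.transpose(1, 2).reshape(b, t, d)


# Toy usage with hypothetical dimensions:
x = torch.randn(2, 5, 64)
w = [torch.randn(64, 64) for _ in range(3)]
y = many_to_many_attention(x, *w, n_heads=8)
print(y.shape)  # torch.Size([2, 5, 64])
```

Note that with n_heads query and key subspaces the attention tensor holds n_heads² attention maps rather than n_heads, which is the structure the two interaction models presumably operate on.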
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Applications (e.g., speech processing, computer vision, NLP)
Supplementary Material: zip