Pay Attention to MLPs

21 May 2021 (edited 26 Oct 2021) · NeurIPS 2021 Poster
  • Keywords: Transformer, Attention, MLP
  • TL;DR: A simple variant of MLP that works well for key applications that Transformers are good at: BERT for NLP and ViT for vision
  • Abstract: Transformers have become one of the most important architectural innovations in deep learning and have enabled many breakthroughs over the past few years. Here we propose a simple network architecture, gMLP, based solely on MLPs with gating, and show that it can perform as well as Transformers in key language and vision applications. Our comparisons show that self-attention is not critical for Vision Transformers, as gMLP can achieve the same accuracy. For BERT, our model achieves parity with Transformers on pretraining perplexity and is better on some downstream NLP tasks. On finetuning tasks where gMLP performs worse, making the gMLP model substantially larger can close the gap with Transformers. In general, our experiments show that gMLP can scale as well as Transformers over increased data and compute.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
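The abstract describes gMLP as "MLPs with gating": each block replaces self-attention with a Spatial Gating Unit that splits the channels in half and multiplies one half by a linear projection of the other half taken across the *token* dimension. Below is a minimal NumPy sketch of that idea; the parameter names, shapes, and the near-identity initialization of the spatial weights are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def spatial_gating_unit(z, W_s, b_s):
    """Split channels in half; gate one half by a linear projection of the
    other half along the sequence (token) dimension."""
    u, v = np.split(z, 2, axis=-1)  # each: (n_tokens, d_ffn // 2)
    v = W_s @ v + b_s               # (n, n) @ (n, d_ffn/2): mixes across tokens
    return u * v                    # elementwise gating

def gmlp_block(x, params):
    """One gMLP block: norm -> channel proj up -> SGU -> channel proj down,
    wrapped in a residual connection. x has shape (n_tokens, d_model)."""
    h = (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + 1e-6)
    z = gelu(h @ params["W1"])                               # up-projection
    z = spatial_gating_unit(z, params["W_s"], params["b_s"])
    return x + z @ params["W2"]                              # down-projection

# Hypothetical usage: initializing W_s near zero and b_s at 1 makes the
# gate start as an identity, a choice assumed here for stable training.
rng = np.random.default_rng(0)
n, d, d_ffn = 8, 16, 32
params = {
    "W1": rng.normal(0.0, 0.02, (d, d_ffn)),
    "W2": rng.normal(0.0, 0.02, (d_ffn // 2, d)),
    "W_s": np.zeros((n, n)),
    "b_s": np.ones((n, 1)),
}
x = rng.normal(size=(n, d))
y = gmlp_block(x, params)
```

Note that `W_s` is an (n_tokens × n_tokens) matrix, so cross-token interaction is learned as a static projection rather than computed dynamically from the input as attention would.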
