MLP-Mixer: An all-MLP Architecture for Vision

21 May 2021 (modified: 15 Oct 2021), NeurIPS 2021 Poster
Keywords: computer vision, image recognition, large-scale training, multi-layer perceptrons, transfer learning
TL;DR: MLP-Mixer is a new competitive architecture for computer vision that uses only basic matrix multiplication routines (MLPs).
Abstract: Convolutional Neural Networks (CNNs) are the go-to model for computer vision. Recently, attention-based networks, such as the Vision Transformer, have also become popular. In this paper we show that while convolutions and attention are both sufficient for good performance, neither of them is necessary. We present MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs). MLP-Mixer contains two types of layers: one with MLPs applied independently to image patches (i.e. "mixing" the per-location features), and one with MLPs applied across patches (i.e. "mixing" spatial information). When trained on large datasets, or with modern regularization schemes, MLP-Mixer attains competitive scores on image classification benchmarks, with pre-training and inference cost comparable to state-of-the-art models. We hope that these results spark further research beyond the realms of well-established CNNs and Transformers.
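
To make the two mixing steps concrete, below is a minimal sketch of one Mixer block in JAX. The dimensions, parameter names, and initialization here are illustrative placeholders, not the paper's or the linked repository's actual configuration, and the learnable LayerNorm scale/offset parameters are omitted for brevity.

```python
import jax
import jax.numpy as jnp

def layer_norm(x, eps=1e-6):
    # Normalize along the channel (last) axis; learnable scale/offset omitted.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / jnp.sqrt(var + eps)

def mlp(x, w1, b1, w2, b2):
    # Two-layer MLP with a GELU nonlinearity, acting on the last axis.
    return jax.nn.gelu(x @ w1 + b1) @ w2 + b2

def mixer_block(x, p):
    # x: (num_patches, channels) table of per-patch features.
    # Token-mixing: transpose so the MLP mixes across patches, then
    # transpose back; residual connection around the step.
    y = x + mlp(layer_norm(x).T, p["tok_w1"], p["tok_b1"],
                p["tok_w2"], p["tok_b2"]).T
    # Channel-mixing: the MLP mixes across channels at each patch.
    return y + mlp(layer_norm(y), p["ch_w1"], p["ch_b1"],
                   p["ch_w2"], p["ch_b2"])

# Illustrative sizes: 196 patches (14x14), 512 channels, hidden widths 256/2048.
S, C, DS, DC = 196, 512, 256, 2048
ks = jax.random.split(jax.random.PRNGKey(0), 5)
p = {
    "tok_w1": 0.02 * jax.random.normal(ks[0], (S, DS)), "tok_b1": jnp.zeros(DS),
    "tok_w2": 0.02 * jax.random.normal(ks[1], (DS, S)), "tok_b2": jnp.zeros(S),
    "ch_w1":  0.02 * jax.random.normal(ks[2], (C, DC)), "ch_b1": jnp.zeros(DC),
    "ch_w2":  0.02 * jax.random.normal(ks[3], (DC, C)), "ch_b2": jnp.zeros(C),
}
x = jax.random.normal(ks[4], (S, C))
print(mixer_block(x, p).shape)  # (196, 512)
```

Each block first lets a shared MLP mix information across patches (token-mixing, operating on the transposed patch table) and then lets a second shared MLP mix across channels at each patch (channel-mixing), with a residual connection around each step; the full model stacks many such blocks between a patch-embedding layer and a classifier head.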
Supplementary Material: pdf
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Code: https://github.com/google-research/vision_transformer