Image-to-Image MLP-mixer for Image Reconstruction

29 Sept 2021 (modified: 22 Oct 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: MLP-mixer, image reconstruction, denoising, compressive sensing
Abstract: Neural networks are highly effective tools for image reconstruction problems such as denoising and compressive sensing. To date, neural networks for image reconstruction have been almost exclusively convolutional; the most popular architecture is the U-net, a convolutional network with a multi-resolution architecture. In this work, we show that a simple network based on the multi-layer perceptron (MLP)-mixer enables state-of-the-art image reconstruction performance without convolutions and without a multi-resolution architecture. Like the original MLP-mixer, the image-to-image MLP-mixer is based exclusively on MLPs operating on linearly transformed image patches. Contrary to the MLP-mixer, we incorporate structure by retaining the relative positions of the image patches. This imposes an inductive bias towards natural images that enables the image-to-image MLP-mixer to learn to denoise images from relatively few examples. The image-to-image MLP-mixer requires fewer parameters than the U-net to achieve the same denoising performance, and its parameter count scales linearly with image resolution instead of quadratically as for the original MLP-mixer. When trained on a moderate number of examples for denoising, the image-to-image MLP-mixer outperforms the U-net by a slight margin. It also outperforms the vision transformer tailored for image reconstruction and classical untrained methods such as BM3D.
One-sentence Summary: We show that a simple network based on the Multi-Layer Perceptron (MLP)-mixer enables state-of-the-art image reconstruction performance without convolutions and without a multi-resolution architecture.
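The abstract describes the architecture only at a high level: MLPs acting on linearly transformed image patches, with patches kept at their original grid positions so relative positions are retained. The following is a minimal PyTorch sketch of that idea, not the authors' code; all class names, hyperparameters, and layer choices are illustrative assumptions, and this sketch does not reproduce the paper's linear parameter scaling (its token-mixing MLP is still quadratic in the number of patches, as in the original MLP-mixer).

```python
# Minimal sketch of an image-to-image MLP-mixer for denoising.
# Illustrative assumptions throughout: layer sizes, normalization,
# and activations are not taken from the paper.
import torch
import torch.nn as nn


class MixerBlock(nn.Module):
    """One mixer block: an MLP across patches (token mixing) followed
    by an MLP across channels, each with a residual connection."""

    def __init__(self, num_patches: int, dim: int, expansion: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(num_patches, expansion * num_patches),
            nn.GELU(),
            nn.Linear(expansion * num_patches, num_patches),
        )
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, expansion * dim),
            nn.GELU(),
            nn.Linear(expansion * dim, dim),
        )

    def forward(self, x):  # x: (batch, num_patches, dim)
        # Token mixing acts along the patch axis, so transpose first.
        x = x + self.token_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))
        return x


class ImageToImageMixer(nn.Module):
    """Patchify -> mixer blocks -> un-patchify. Output patches are
    placed back at their original locations, which is how the sketch
    retains the relative positions of the image patches."""

    def __init__(self, image_size=256, patch_size=16, dim=256, depth=8):
        super().__init__()
        self.patch = patch_size
        num_patches = (image_size // patch_size) ** 2
        self.embed = nn.Linear(patch_size * patch_size, dim)
        self.blocks = nn.Sequential(
            *[MixerBlock(num_patches, dim) for _ in range(depth)]
        )
        self.unembed = nn.Linear(dim, patch_size * patch_size)

    def forward(self, x):  # x: (batch, 1, H, W) grayscale image
        b, c, h, w = x.shape
        p = self.patch
        # Split the image into non-overlapping p x p patches.
        patches = x.unfold(2, p, p).unfold(3, p, p)  # (b, c, h/p, w/p, p, p)
        patches = patches.reshape(b, -1, p * p)      # (b, num_patches, p*p)
        tokens = self.blocks(self.embed(patches))
        out = self.unembed(tokens)                   # (b, num_patches, p*p)
        # Reassemble patches on the original grid.
        out = out.reshape(b, c, h // p, w // p, p, p)
        out = out.permute(0, 1, 2, 4, 3, 5).reshape(b, c, h, w)
        return out


# Usage: denoise a batch of noisy 256x256 grayscale images.
model = ImageToImageMixer()
noisy = torch.randn(2, 1, 256, 256)
denoised = model(noisy)  # same shape as the input
```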
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2202.02018/code)