PixMIM: Rethinking Pixel Reconstruction in Masked Image Modeling

Published: 29 Jan 2024. Last Modified: 29 Jan 2024. Accepted by TMLR.
Abstract: Masked Image Modeling (MIM) has achieved promising progress with the advent of Masked Autoencoders (MAE) and BEiT. However, subsequent works have complicated the framework with new auxiliary tasks or extra pre-trained models, inevitably increasing computational overhead. This paper undertakes a fundamental analysis of MIM from the perspective of pixel reconstruction, examining the input image patches and the reconstruction target, and highlights two critical but previously overlooked bottlenecks. Based on this analysis, we propose a remarkably simple and effective method, PixMIM, that entails two strategies: 1) filtering out the high-frequency components of the reconstruction target to de-emphasize the network's focus on texture-rich details, and 2) adopting a conservative data transform strategy to alleviate the problem of missing foreground in MIM training. PixMIM can be easily integrated into most existing pixel-based MIM approaches (i.e., those using raw pixels as the reconstruction target) with negligible additional computation. Without bells and whistles, our method consistently improves four MIM approaches, MAE, MFF, ConvMAE, and LSMAE, across various downstream tasks. We believe this effective plug-and-play method will serve as a strong baseline for self-supervised learning and provide insights for future improvements of the MIM framework. Code and models are available.
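To make the two strategies concrete, here is a minimal PyTorch sketch, not the paper's implementation (the official code is in the repository linked below): a frequency-domain low-pass filter applied to the reconstruction target, and a more conservative `RandomResizedCrop`. The function name `low_pass_filter`, the `cutoff_ratio=0.25`, and the `(0.67, 1.0)` crop scale are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch of PixMIM's two strategies; hyperparameters are
# assumptions, not the paper's settings.
import torch
import torchvision.transforms as T


def low_pass_filter(images: torch.Tensor, cutoff_ratio: float = 0.25) -> torch.Tensor:
    """Keep only low-frequency components of (B, C, H, W) images via FFT,
    producing a texture-suppressed reconstruction target."""
    freq = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))
    _, _, h, w = images.shape
    yy, xx = torch.meshgrid(
        torch.arange(h, device=images.device, dtype=torch.float32) - h // 2,
        torch.arange(w, device=images.device, dtype=torch.float32) - w // 2,
        indexing="ij",
    )
    # Circular low-pass mask centered at the zero frequency.
    radius = cutoff_ratio * min(h, w) / 2
    mask = (yy ** 2 + xx ** 2).sqrt() <= radius
    freq = freq * mask
    return torch.fft.ifft2(torch.fft.ifftshift(freq, dim=(-2, -1))).real


# Conservative cropping: raising the lower bound of the crop scale
# (MAE commonly uses 0.2; 0.67 here is an illustrative choice) keeps
# more of the foreground object in each training view.
conservative_transform = T.Compose([
    T.RandomResizedCrop(224, scale=(0.67, 1.0)),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])
```

Both pieces slot into a pixel-based MIM pipeline without touching the encoder or decoder: the transform replaces the default augmentation, and the filter is applied once to each image before computing the reconstruction loss.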
Submission Length: Long submission (more than 12 pages of main content)
Code: https://github.com/open-mmlab/mmselfsup/tree/main/configs/selfsup/pixmim
Assigned Action Editor: ~Mathieu_Salzmann1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1685