Extreme Masking for Learning Instance and Distributed Visual Representations

Published: 13 Mar 2023, Last Modified: 13 Mar 2023
Accepted by TMLR
Abstract: The paper presents a scalable approach for simultaneously learning spatially distributed visual representations over individual tokens and a holistic instance representation. We use self-attention blocks to represent spatially distributed tokens, followed by cross-attention blocks to aggregate the holistic instance. The core of the approach is the use of extremely large token masking (75%–90%) as data augmentation for supervision. Our model, named ExtreMA, follows the plain BYOL approach, where the instance representation from the unmasked subset is trained to predict that from the intact input. Instead of encouraging invariance across inputs, learning requires the model to capture informative variations in an image. The paper makes three contributions: 1) It presents random masking as a strong and computationally efficient data augmentation for siamese representation learning. 2) With multiple sampling per instance, extreme masking greatly speeds up learning and improves performance with more data. 3) ExtreMA obtains stronger linear probing performance than masked modeling methods, and better transfer performance than prior contrastive models.
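To make the idea in the abstract concrete, below is a minimal, illustrative PyTorch sketch of extreme random masking combined with a BYOL-style prediction loss. This is not the authors' implementation (see the linked repository for that); the names `random_mask`, `InstanceAggregator`, and `byol_loss` are hypothetical, the backbone is reduced to a small transformer over pre-computed patch tokens, and the momentum teacher and predictor head of full BYOL are simplified to a stop-gradient target.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def random_mask(tokens, keep_ratio=0.25):
    """Keep a random subset of patch tokens per image.
    tokens: (B, N, D). keep_ratio=0.1-0.25 corresponds to 75%-90% masking."""
    B, N, D = tokens.shape
    n_keep = max(1, int(N * keep_ratio))
    noise = torch.rand(B, N, device=tokens.device)
    keep_idx = noise.argsort(dim=1)[:, :n_keep]           # (B, n_keep)
    keep_idx = keep_idx.unsqueeze(-1).expand(-1, -1, D)   # (B, n_keep, D)
    return tokens.gather(1, keep_idx)

class InstanceAggregator(nn.Module):
    """Self-attention over tokens (distributed representations), then
    cross-attention from a learned query to pool a holistic instance vector."""
    def __init__(self, dim=256, depth=2, heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.query = nn.Parameter(torch.zeros(1, 1, dim))
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens):
        tokens = self.encoder(tokens)                     # per-token features
        q = self.query.expand(tokens.size(0), -1, -1)
        inst, _ = self.cross(q, tokens, tokens)           # holistic instance
        return tokens, inst.squeeze(1)

def byol_loss(student_inst, teacher_inst):
    """Negative cosine similarity between prediction and stop-gradient target."""
    p = F.normalize(student_inst, dim=-1)
    z = F.normalize(teacher_inst.detach(), dim=-1)
    return 2 - 2 * (p * z).sum(dim=-1).mean()

# Usage: the student branch sees only ~25% of tokens; the teacher branch
# sees the intact input and provides the prediction target.
tokens = torch.randn(8, 196, 256)            # e.g. 14x14 patch embeddings
model = InstanceAggregator()
_, target = model(tokens)                    # intact input
_, pred = model(random_mask(tokens, 0.25))   # 75% of tokens masked out
loss = byol_loss(pred, target)
```

Because the encoder only processes the kept tokens, each masked view costs roughly `keep_ratio` of a full forward pass, which is what makes drawing multiple masked samples per instance cheap.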
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
- Revision 1: broader impacts and discussion of limitations; failure-example visualizations of reconstructions in the appendix; minor revision changes.
- Revision 2: added notes about performance variance; added Appendix B and Table 12 on multi-masking performance efficiency.
- Revision 3: added discussion of data2vec; fixed various writing/presentation issues raised by reviewer xj2Y.
- Revision 4: incorporated revision suggestions from reviewer g5tf.
- Revision 5: added MSN results.
- Revision 6: detailed discussion of data2vec.
Code: https://github.com/microsoft/ExtreMA
Assigned Action Editor: ~David_Fouhey2
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 620