NASiam: Efficient Representation Learning using Neural Architecture Search for Siamese Networks

22 Sept 2022 (modified: 25 Nov 2024) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Neural Architecture Search, Self-Supervised Learning, Representation Learning, Siamese Networks, Computer Vision
TL;DR: A novel method that improves Siamese network architectures using Neural Architecture Search.
Abstract: Siamese networks are among the most popular methods for unsupervised visual representation learning, while Neural Architecture Search (NAS) is becoming increasingly important as a technique for discovering efficient deep learning architectures. In this article, we present NASiam, a novel approach that, for the first time, uses differentiable NAS to improve the multilayer-perceptron projector and predictor architectures (the encoder/predictor pair) inside Siamese network frameworks while preserving the simplicity of previous baselines. We show that these new architectures allow backbone convolutional models to learn strong representations efficiently. NASiam reaches competitive performance on both small-scale (CIFAR) and large-scale (ImageNet) image classification datasets. We discuss the composition of the NAS-discovered architectures and offer hypotheses on why they manage to prevent collapsing behavior.
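
To make the idea concrete, here is a minimal sketch of how a differentiable (DARTS-style) search over the hidden layers of a projector MLP might look. The candidate operation set, module names (`MixedLayer`, `SearchableProjector`), and dimensions below are illustrative assumptions, not the exact search space or implementation used by NASiam.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def candidate_ops(dim):
    # Hypothetical candidate operations for one hidden layer of the projector MLP.
    return nn.ModuleList([
        nn.Sequential(nn.Linear(dim, dim, bias=False), nn.BatchNorm1d(dim), nn.ReLU(inplace=True)),
        nn.Sequential(nn.Linear(dim, dim, bias=False), nn.BatchNorm1d(dim), nn.Hardswish(inplace=True)),
        nn.Sequential(nn.Linear(dim, dim, bias=False), nn.BatchNorm1d(dim)),  # linear + BN, no activation
        nn.Identity(),  # skip connection
    ])


class MixedLayer(nn.Module):
    """DARTS-style mixed operation: a softmax-weighted sum of candidate ops."""

    def __init__(self, dim):
        super().__init__()
        self.ops = candidate_ops(dim)
        # Architecture parameters (alphas), optimized by gradient descent
        # alongside (or alternately with) the network weights.
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))


class SearchableProjector(nn.Module):
    """Projector MLP whose hidden layers are searched with differentiable NAS."""

    def __init__(self, in_dim=2048, hid_dim=2048, out_dim=2048, n_layers=2):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Linear(in_dim, hid_dim, bias=False),
            nn.BatchNorm1d(hid_dim),
            nn.ReLU(inplace=True),
        )
        self.cells = nn.ModuleList([MixedLayer(hid_dim) for _ in range(n_layers)])
        self.head = nn.Linear(hid_dim, out_dim)

    def forward(self, x):
        x = self.stem(x)
        for cell in self.cells:
            x = cell(x)
        return self.head(x)
```

In a DARTS-like setup, the search phase is typically followed by discretization: each mixed layer keeps only the candidate operation with the largest architecture weight, yielding a fixed projector/predictor MLP that is then trained with the backbone inside the Siamese framework.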
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Unsupervised and Self-supervised learning
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/nasiam-efficient-representation-learning/code)
