TransNAS-Bench-101: Improving Transferability and Generalizability of Cross-Task Neural Architecture Search

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Withdrawn Submission · Readers: Everyone
Abstract: Recent breakthroughs in Neural Architecture Search (NAS) are extending the field's research scope towards a broader range of vision tasks and more diversified search spaces. While existing NAS methods mostly design architectures for a single task, algorithms that look beyond single-task search are emerging in pursuit of more efficient and universal solutions across various tasks. Many of them leverage transfer learning, seeking to preserve, reuse, and refine network design knowledge to achieve higher efficiency on future tasks. However, the huge computational cost and experimental complexity of cross-task NAS impose barriers to valuable research in this direction. Existing transferable NAS algorithms are also based on different settings, e.g., datasets and search spaces, which raises concerns about performance comparability. Although existing NAS benchmarks provide partial solutions, they all focus on a single type of vision task, i.e., classification. In this work, we propose TransNAS-Bench-101, a benchmark containing network performance across 7 tasks, covering classification, regression, pixel-level prediction, and self-supervised tasks. This diversity creates opportunities to transfer NAS methods among tasks and allows more complex transfer schemes to evolve. We explore two fundamentally different types of search space: a cell-level search space and a macro-level search space. With 7,352 backbones evaluated on 7 tasks, 51,464 trained models with detailed training information are provided. Generating this benchmark took about 193,760 GPU hours, equivalent to 22.12 years of computation on a single Nvidia V100 GPU. Analysis of 4 benchmark transfer schemes highlights that: (1) directly deploying both architectures and policies can easily lead to negative transfer unless guided by carefully designed mechanisms; (2) the role of evolutionary methods in transferable NAS may have been overlooked in the past; (3) performing consistently well across tasks and search spaces remains a real challenge for NAS algorithms. We also provide suggestions for future research along with this analysis. With TransNAS-Bench-101, we hope to encourage the advent of exceptional NAS algorithms that raise cross-task search efficiency and generalizability to the next level.
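To make the transfer setting concrete, below is a minimal, self-contained Python sketch of one of the schemes the abstract alludes to: regularized (aging) evolution run on a source task, with its final population warm-starting search on a target task. Everything here is illustrative rather than the released TransNAS-Bench-101 API: the 6-edge/4-operation cell encoding, the task names, and the `lookup` oracle (a deterministic stand-in for a benchmark table query) are all assumptions made for the example.

```python
import random
from collections import deque

# Toy cell encoding: 6 edges, each choosing one of 4 operations.
# This loosely mirrors a cell-level search space; the real benchmark's
# encoding may differ.
NUM_EDGES, NUM_OPS = 6, 4

def random_arch():
    return tuple(random.randrange(NUM_OPS) for _ in range(NUM_EDGES))

def mutate(arch):
    """Resample the operation on one randomly chosen edge."""
    ops = list(arch)
    ops[random.randrange(NUM_EDGES)] = random.randrange(NUM_OPS)
    return tuple(ops)

def lookup(arch, task):
    """Stand-in for a benchmark table lookup: return a stored validation
    score for `arch` on `task`. Here it is a pseudo-random oracle seeded
    by (arch, task), NOT real benchmark data."""
    return random.Random(str((arch, task))).random()

def regularized_evolution(task, cycles=200, pop_size=20, sample_size=5,
                          seed_pop=None):
    """Aging evolution: the oldest member is evicted, and the best of a
    random sample is mutated. `seed_pop` lets a later task start from
    architectures found on an earlier one, i.e. a simple
    architecture-transfer scheme."""
    population = deque(maxlen=pop_size)
    for arch in (seed_pop or [])[:pop_size]:
        population.append((arch, lookup(arch, task)))
    while len(population) < pop_size:
        arch = random_arch()
        population.append((arch, lookup(arch, task)))
    best = max(population, key=lambda p: p[1])
    for _ in range(cycles):
        sample = random.sample(list(population), sample_size)
        parent = max(sample, key=lambda p: p[1])
        child = mutate(parent[0])
        entry = (child, lookup(child, task))
        population.append(entry)  # deque(maxlen=...) evicts the oldest
        best = max(best, entry, key=lambda p: p[1])
    return best, [p[0] for p in population]

# Search on a source task, then warm-start the target task with the
# final population. Skipping such guidance and deploying source-task
# results directly is one of the negative-transfer pitfalls the
# benchmark analysis highlights.
best_src, pop_src = regularized_evolution("class_scene")
best_tgt, _ = regularized_evolution("room_layout", seed_pop=pop_src)
print("source best:", best_src, "target best:", best_tgt)
```

With a tabular benchmark, each `lookup` is a table read rather than a training run, which is what makes comparing many such transfer schemes tractable at this scale.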
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Reviewed Version (pdf): https://openreview.net/references/pdf?id=1vgCgdi-w