ScaleNAS: Multi-Path One-Shot NAS for Scale-Aware High-Resolution Representation

Published: 16 May 2022, Last Modified: 05 May 2023
AutoML-Conf 2022 (Main Track)
Readers: Everyone
Abstract: Scale variance among different sizes of body parts and objects is a challenging problem for visual recognition tasks. Existing works usually design a dedicated backbone or apply Neural Architecture Search (NAS) for each task to tackle this challenge, but they impose significant limitations on the design or search space. To solve these problems, we present ScaleNAS, a one-shot learning method for exploring scale-aware representations. ScaleNAS solves multiple tasks at a time by searching multi-scale feature aggregation. ScaleNAS adopts a flexible search space that allows an arbitrary number of blocks and cross-scale feature fusions. To cope with the high search cost incurred by the flexible space, ScaleNAS employs one-shot learning for a multi-scale supernet driven by grouped sampling and evolutionary search. Without further retraining, the searched ScaleNet models can be directly deployed for different visual recognition tasks with superior performance. We use ScaleNAS to create high-resolution models for two different tasks: ScaleNet-P for human pose estimation and ScaleNet-S for semantic segmentation. ScaleNet-P and ScaleNet-S outperform existing manually crafted and NAS-based methods in both tasks. When applied to bottom-up human pose estimation, ScaleNet-P surpasses the state-of-the-art HigherHRNet. In particular, ScaleNet-P4 achieves 71.6% AP on COCO test-dev, setting a new state-of-the-art result.
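To make the search procedure named in the abstract concrete, below is a minimal Python sketch of how grouped sampling over a multi-scale supernet and a subsequent evolutionary search might fit together. This is an illustration under stated assumptions, not the authors' implementation: the (blocks, fusion-mask) gene encoding, the external evaluate function that scores a sub-architecture under shared supernet weights, and all names and hyperparameters here are hypothetical.

```python
import random

def grouped_sample(num_stages, num_groups=4):
    """Sample one sub-architecture per group so that differently sized
    subnets are drawn (and hence trained) at comparable rates.
    Each gene is a hypothetical (num_blocks, cross-scale fusion mask) pair."""
    archs = []
    for _ in range(num_groups):
        arch = [(random.randint(1, 4), random.randint(0, 2 ** 4 - 1))
                for _ in range(num_stages)]
        archs.append(arch)
    return archs

def evolutionary_search(evaluate, population_size=50, generations=20,
                        num_stages=4, mutate_prob=0.2):
    """Evolve architectures using the trained supernet weights; `evaluate`
    is assumed to score a sub-architecture without any retraining."""
    population = [grouped_sample(num_stages, 1)[0]
                  for _ in range(population_size)]
    for _ in range(generations):
        # keep the better half as parents
        scored = sorted(population, key=evaluate, reverse=True)
        parents = scored[:population_size // 2]
        children = []
        while len(children) < population_size - len(parents):
            # uniform crossover of two parents, stage by stage
            a, b = random.sample(parents, 2)
            child = [x if random.random() < 0.5 else y
                     for x, y in zip(a, b)]
            # mutate the block count of each stage with some probability
            child = [(random.randint(1, 4), fusion)
                     if random.random() < mutate_prob else (blocks, fusion)
                     for blocks, fusion in child]
            children.append(child)
        population = parents + children
    return max(population, key=evaluate)
```

In this sketch the grouped sampling would be called inside the supernet training loop, while the evolutionary search runs afterwards on the frozen supernet, which is what allows the searched model to be deployed without retraining.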
Keywords: automatic machine learning, neural architecture search, evolutionary search
One-sentence Summary: The authors present a one-shot search method and a novel search space for multi-scale neural architectures
Track: Main track
Reproducibility Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
Reviewers: Hsin-Pai Cheng (dave.cheng@duke.edu)
Main Paper And Supplementary Material: pdf
Code And Dataset Supplement: zip
Steps For Environmental Footprint Reduction During Development: Fine-tuning models crafted by Neural Architecture Search is computationally expensive, which may cause environmental impact (e.g., carbon footprint contributing to global warming). Our work vastly reduces design and evaluation time, bringing a positive environmental impact by combining a more flexible design space with the proposed efficient one-shot search method. Without further fine-tuning, our searched architectures can be directly deployed onto devices.
CPU Hours: 900
GPU Hours: 900
TPU Hours: 0
Evaluation Metrics: Yes
Estimated CO2e Footprint: 126
Class Of Approaches: Evolutionary search, one-shot NAS
Datasets And Benchmarks: MPII, COCO, CrowdPose
Performance Metrics: mIoU, mAP (mean Average Precision)
Benchmark Performance: Multi-Person Pose Estimation (COCO test-dev): 71.6 AP; Multi-Person Pose Estimation (CrowdPose): 71.3 mAP @0.5:0.95
Benchmark Time: Multi-Person Pose Estimation (COCO test-dev): 3 GPU days; Multi-Person Pose Estimation (CrowdPose): 3 GPU days