A Hardware-Aware Framework for Accelerating Neural Architecture Search Across Modalities

25 Feb 2022 (modified: 05 May 2023), AutoML 2022 (Late-Breaking Workshop)
Abstract: Recent advances in Neural Architecture Search (NAS), such as one-shot NAS, offer the ability to extract specialized hardware-aware sub-network configurations from a task-specific super-network. While considerable effort has been devoted to improving the first stage, namely the training of the super-network, the search for derivative high-performing sub-networks is still under-explored. We propose a flexible search framework that automatically and efficiently finds sub-networks optimized for different performance metrics and hardware configurations. Specifically, we demonstrate how various evolutionary algorithms, when paired with lightly trained objective predictors, can accelerate architecture search in a multi-objective setting across modalities including machine translation, recommendation, and image classification.
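To make the described search procedure concrete, below is a minimal, self-contained sketch (not the authors' released code) of predictor-guided multi-objective evolutionary search: a population of encoded sub-networks is evolved, with cheap surrogates standing in for the lightly trained accuracy and latency predictors. The fixed-length integer encoding, the mutation/crossover operators, and the toy predictor functions are all illustrative assumptions.

```python
# Sketch of predictor-guided multi-objective evolutionary search over
# sub-network configurations. All names and encodings are hypothetical.
import random

GENOME_LEN = 20          # assumed fixed-length sub-network encoding
CHOICES = [0, 1, 2, 3]   # assumed per-position design choices

def random_genome():
    return [random.choice(CHOICES) for _ in range(GENOME_LEN)]

def mutate(genome, rate=0.1):
    return [random.choice(CHOICES) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

# Stand-ins for lightly trained objective predictors; in practice these
# would be regressors fit on a small set of measured sub-networks.
def predict_accuracy(genome):
    return sum(genome) / (3 * GENOME_LEN)             # toy surrogate

def predict_latency(genome):
    return 1.0 + 0.05 * sum(g == 3 for g in genome)   # toy surrogate

def dominates(p, q):
    # p dominates q if it is no worse on both objectives and strictly
    # better on at least one (maximize accuracy, minimize latency).
    return (p[0] >= q[0] and p[1] <= q[1]) and (p[0] > q[0] or p[1] < q[1])

def pareto_front(population):
    scored = [(g, (predict_accuracy(g), predict_latency(g))) for g in population]
    return [g for g, s in scored
            if not any(dominates(t, s) for _, t in scored)]

# Evolve: keep the predicted Pareto front, refill with mutated offspring.
population = [random_genome() for _ in range(64)]
for generation in range(30):
    parents = pareto_front(population)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(64 - len(parents))]
    population = parents + children

for g in pareto_front(population)[:5]:
    print(predict_accuracy(g), predict_latency(g), g)
```

Because every fitness evaluation in the loop is a predictor call rather than a validation run, the search cost is decoupled from the cost of measuring each candidate architecture.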
Keywords: neural architecture search, multi-objective optimization, evolutionary algorithms, machine translation, computer vision
One-sentence Summary: In the context of various modalities, we examine how evolutionary algorithms paired with iteratively trained performance predictors can accelerate neural architecture search.
Track: Main track
Reproducibility Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
Reviewers: Maciej Szankin (maciej.szankin@intel.com), Sharath Nittur Sridhar (sharath.nittur.sridhar@intel.com), Pablo Munoz (pablo.munoz@intel.com)
CPU Hours: 1600
GPU Hours: 1500
TPU Hours: 0
Evaluation Metrics: Yes
Class Of Approaches: Evolutionary Methods, Sequential Model-based Optimization
Datasets And Benchmarks: ImageNet, WMT 2014 En-De, Pinterest-20, OFA MobileNetV3
Performance Metrics: Accuracy, Latency, BLEU score, Hit Rate
Main Paper And Supplementary Material: pdf
Steps For Environmental Footprint Reduction During Development: For tuning the parameters of the evolutionary algorithms, we used predictor-based objective measurements to avoid the high compute cost of making validation measurements for every DNN architecture during the NAS process.
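Since this footprint-reduction step hinges on substituting predictor queries for validation measurements, here is a minimal sketch, assuming a simple regularized linear surrogate and a hypothetical integer encoding of sub-networks, of how such a lightweight objective predictor might be fit from a handful of measured architectures. It is an illustration, not the paper's implementation.

```python
# Fit a cheap latency predictor from a few measured sub-networks, so the
# search loop can query the surrogate instead of running full validation.
import numpy as np

# Hypothetical data: each row encodes one measured sub-network
# configuration; y holds its measured latency in milliseconds.
X = np.array([[0, 1, 3, 2], [1, 1, 0, 3], [2, 0, 1, 1],
              [3, 2, 2, 0], [0, 3, 1, 2], [1, 2, 3, 3]], dtype=float)
y = np.array([12.1, 14.8, 10.3, 11.7, 13.2, 16.4])

# Ridge-style linear fit via regularized least squares.
lam = 1e-2
A = X.T @ X + lam * np.eye(X.shape[1])
w = np.linalg.solve(A, X.T @ y)

def predict_latency(encoding):
    """Cheap surrogate used in place of an on-device measurement."""
    return float(np.asarray(encoding, dtype=float) @ w)

print(predict_latency([1, 2, 0, 3]))
```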
Code And Dataset Supplement: zip
Estimated CO2e Footprint: 149 kg