Multi-objective optimization for Hardware-aware Neural Architecture Search

29 Sept 2021 (modified: 13 Feb 2023), ICLR 2022 Conference Withdrawn Submission
Keywords: multi-objective optimization, neural architecture search, evolutionary algorithm, hardware-aware, accuracy predictor, latency estimator, FPGA
Abstract: Hardware-aware Neural Architecture Search (HW-NAS) has been drawing increasing attention since it can automatically design deep neural networks optimized for a resource-constrained device. However, existing methods may not be optimal with respect to multiple objectives (accuracy and hardware metrics). Thus, we propose a new multi-objective optimization method for searching for promising architectures in HW-NAS. Our method addresses the architecture selection process in NAS: an architecture population is divided into small cells according to a given hardware-cost metric, and then the top-ranked architecture is selected within each cell. The selected architectures act as knobs that guide the direction of evolution in NAS. Despite its simplicity, this method leads to promising results, improving both accuracy and hardware metrics. Using latency as the hardware metric, we demonstrated that the optimized architectures extend the top accuracy to a much lower inference-latency regime. We can also significantly reduce the computing cost of the search by using both an accuracy predictor and a latency estimator and by sharing the pre-trained weights of a super-network. This makes HW-NAS research more reproducible and accessible to the public. As target hardware, we experimented with both a CPU and a Field Programmable Gate Array (FPGA). The code is available at https://anonymous.4open.science/r/multi-objective-optimization-0E27/README.md.
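The cell-based selection step described in the abstract can be sketched as follows. This is an illustrative reconstruction rather than the authors' released code (see the linked repository for that); the `select_knobs` helper and the `accuracy`/`latency` fields are hypothetical names chosen for the example.

```python
import random

def select_knobs(population, num_cells=8):
    """Partition a population into equal-width latency cells and pick the
    top-accuracy architecture from each cell. These selected architectures
    would then seed mutation/crossover in the evolutionary search loop.
    """
    lo = min(a['latency'] for a in population)
    hi = max(a['latency'] for a in population)
    width = (hi - lo) / num_cells or 1.0  # guard against a degenerate range

    # Assign each architecture to the cell covering its latency
    cells = [[] for _ in range(num_cells)]
    for arch in population:
        idx = min(int((arch['latency'] - lo) / width), num_cells - 1)
        cells[idx].append(arch)

    # Top-ranked (highest-accuracy) architecture within each non-empty cell
    return [max(cell, key=lambda a: a['accuracy']) for cell in cells if cell]

# Usage with a toy random population (accuracy in [0, 1], latency in ms)
population = [{'accuracy': random.random(), 'latency': random.uniform(5, 50)}
              for _ in range(100)]
knobs = select_knobs(population)
```

Because one top architecture is kept per latency cell, the selected set spans the whole hardware-cost range rather than collapsing onto the most accurate (and typically slowest) region, which is what lets the evolution push accuracy into the low-latency regime.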
One-sentence Summary: We propose a new multi-objective optimization method for searching for the best architecture in HW-NAS.