Abstract: Neural Architecture Search (NAS) has shown its potential to aid the development of more efficient neural networks. For hardware, efficiency often equates to power usage or latency. Over the years, many researchers have incorporated hardware performance into their NAS experiments. However, accurately modelling hardware performance is a challenge in itself. We look to the field of design space exploration (DSE) for more precise performance metrics on neural network accelerators and incorporate the results into our NAS search. Our experiments show that doing so achieves a significant reduction in latency and energy consumption. The proposed approach also provides detailed insight into the breakdown of the energy consumption and latency of the optimised model.
External IDs: dblp:conf/cvpr/HendrickxSRVG23