An Automated Approach to Accelerate DNNs on Edge Devices

ISCAS 2021
Abstract: Deployment of Deep Neural Networks (DNNs) on edge devices can significantly increase the utility of DNNs for a variety of applications. However, executing DNN models on edge devices remains a major challenge, as the heavy computation and memory bandwidth requirements of such models limit their adoption. Highly optimized code for DNN model execution can enable many more use-cases than are currently possible. However, current strategies still rely on manual optimization for efficient resource utilization, which is not only cumbersome but also requires a high level of expert intervention in the rapidly changing DNN model landscape. In this work, we provide an automated way of optimizing Convolutional Neural Network (CNN) models using a Deep Reinforcement Learning (DRL) algorithm. Experiments with our DRL technique demonstrate 1.85×, 1.58×, and 1.64× speedups in execution time for the MobileNetV1, MobileNetV2, and EfficientNet-Lite0 CNN models, respectively, on mobile CPUs.
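The abstract gives no implementation details, but the general idea of an RL agent choosing per-layer execution configurations and using measured on-device latency as its reward can be sketched as below. This is a minimal illustrative sketch, not the authors' method: the configuration knobs in CONFIGS, the measure_latency() stand-in, and all hyperparameters are assumptions introduced here for illustration only.

```python
# Minimal sketch of an RL-style auto-tuner for per-layer CNN execution
# configurations. NOT the paper's implementation: the config space, the
# latency stand-in, and the tuning parameters are hypothetical.
import random

CONFIGS = [  # hypothetical per-layer knobs a code generator might expose
    {"tile": 8,  "threads": 2},
    {"tile": 16, "threads": 2},
    {"tile": 16, "threads": 4},
    {"tile": 32, "threads": 4},
]

def measure_latency(layer, cfg):
    """Stand-in for compiling the layer with cfg, running it on the device, and timing it."""
    base = 10.0 + 3.0 * layer                                   # pretend deeper layers cost more
    penalty = abs(cfg["tile"] - 16) * 0.1 + abs(cfg["threads"] - 4) * 0.5
    return base + penalty + random.uniform(0.0, 0.3)            # measurement noise

def tune(num_layers=5, episodes=200, eps=0.2, lr=0.3):
    # One value estimate per (layer, config); reward is negative measured latency.
    q = [[0.0] * len(CONFIGS) for _ in range(num_layers)]
    for _ in range(episodes):
        for layer in range(num_layers):
            if random.random() < eps:
                a = random.randrange(len(CONFIGS))               # explore
            else:
                a = max(range(len(CONFIGS)), key=lambda i: q[layer][i])  # exploit
            reward = -measure_latency(layer, CONFIGS[a])
            q[layer][a] += lr * (reward - q[layer][a])           # running average update
    return [max(range(len(CONFIGS)), key=lambda i: q[l][i]) for l in range(num_layers)]

if __name__ == "__main__":
    for layer, idx in enumerate(tune()):
        print(f"layer {layer}: {CONFIGS[idx]}")
```

In a real setting, the reward would come from timing the generated kernels on the target mobile CPU rather than from the synthetic latency model used here.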