Energy-Efficient and High-Throughput CNN Inference on Embedded CPUs-GPUs MPSoCs

Published: 01 Jan 2021 · Last Modified: 06 Aug 2024 · SAMOS 2021 · CC BY-SA 4.0
Abstract: Nowadays, many application scenarios, such as mobile phones, drones, and mobile robots, require Convolutional Neural Network (CNN) inference on embedded CPU-GPU MPSoCs. CNN model inference is usually computation-intensive, while embedded CPU-GPU MPSoCs are usually constrained in energy consumption. Therefore, how to perform computation-intensive CNN inference in an energy-efficient and high-throughput manner is an important issue. However, existing Deep Learning (DL) frameworks focus only on achieving high-throughput inference when deploying CNN models on CPU or GPU processors, without specifically considering energy consumption.