A CGRA-based Neural Network Inference Engine for Deep Reinforcement Learning

Published: 01 Jan 2018, Last Modified: 14 Nov 2024, APCCAS 2018, CC BY-SA 4.0
Abstract: The rapid development of artificial intelligence algorithms has created demand for dedicated neural network accelerators, whose high computing performance and low power consumption enable the deployment of deep learning algorithms on edge computing nodes. State-of-the-art deep learning engines mostly support supervised learning models such as CNNs and RNNs, whereas very few AI engines support on-chip reinforcement learning, the foremost algorithmic kernel for the decision-making subsystem of an autonomous system. In this work, a Coarse-Grained Reconfigurable Array (CGRA)-like AI computing engine has been designed for the deployment of both supervised and reinforcement learning. Logic synthesis at a design frequency of 200 MHz in 65 nm CMOS technology yields a silicon area of 0.32 mm² and a power consumption of 15.45 mW for the proposed engine. The proposed on-chip AI engine facilitates the implementation of end-to-end perceptual and decision-making networks, and can find wide employment in autonomous driving, robotics, and UAVs.
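As context for the kind of workload such an engine targets, the sketch below is a minimal software model of a grid of processing elements (PEs) executing the multiply-accumulate core of a fully-connected layer, followed by a Q-value readout of the sort deep reinforcement learning inference requires. This is purely illustrative: the 4x4 grid size, the tiled dataflow, and the helper names (`pe_grid_matvec`, `q_network_inference`) are assumptions for exposition, not the architecture or dataflow described in the paper.

```python
import numpy as np

# Illustrative sketch only: a software model of a CGRA-like PE grid.
# The 4x4 grid and block-tiled dataflow are assumptions, not the
# engine's actual microarchitecture.
PE_ROWS, PE_COLS = 4, 4

def pe_grid_matvec(W, x):
    """Tile a matrix-vector product onto a PE_ROWS x PE_COLS grid.

    Each tile models one PE accumulating partial sums for a
    (row, column) block, mimicking the MAC arrays that dominate
    neural network inference.
    """
    out = np.zeros(W.shape[0])
    for r0 in range(0, W.shape[0], PE_ROWS):
        for c0 in range(0, W.shape[1], PE_COLS):
            block = W[r0:r0 + PE_ROWS, c0:c0 + PE_COLS]
            out[r0:r0 + PE_ROWS] += block @ x[c0:c0 + PE_COLS]
    return out

def q_network_inference(state, W1, b1, W2, b2):
    """Two-layer Q-network: the decision-making kernel of deep RL."""
    h = np.maximum(pe_grid_matvec(W1, state) + b1, 0.0)  # ReLU hidden layer
    return pe_grid_matvec(W2, h) + b2                    # Q-value per action

# Usage: pick the greedy action for a 16-dim state and 4 actions.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((32, 16)), np.zeros(32)
W2, b2 = rng.standard_normal((4, 32)), np.zeros(4)
state = rng.standard_normal(16)
action = int(np.argmax(q_network_inference(state, W1, b1, W2, b2)))
```

Because both the supervised (CNN/RNN) and reinforcement-learning paths reduce to the same tiled MAC primitive, a single reconfigurable PE array can in principle serve both, which is the premise the abstract describes.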