Tartan: Accelerating Fully-Connected and Convolutional Layers in Deep Learning Networks by Exploiting Numerical Precision Variability

Submitted to ICLR 2017
Abstract: Tartan (TRT), a hardware accelerator for inference with Deep Neural Networks (DNNs), is presented and evaluated on Convolutional Neural Networks (CNNs). TRT exploits the variable per-layer precision requirements of DNNs to deliver execution time that is proportional to the precision p, in bits, used per layer for convolutional and fully-connected layers. Prior art has demonstrated an accelerator with the same execution performance only for convolutional layers. Experiments on image classification CNNs show that, on average across all networks studied, TRT outperforms a state-of-the-art bit-parallel accelerator by 1.90x without any loss in accuracy while being 1.17x more energy efficient. TRT requires no network retraining and enables trading off accuracy for additional improvements in execution performance and energy efficiency. For example, if a 1% relative loss in accuracy is acceptable, TRT is on average 2.04x faster and 1.25x more energy efficient than the bit-parallel accelerator. This revision includes post-layout results and a better configuration that processes 2 bits at a time, resulting in better efficiency and lower area overhead.
TL;DR: A hardware accelerator whose execution time for Fully-Connected and Convolutional Layers in CNNs scales proportionally with the number of bits used to represent the input activations and/or weights.
Conflicts: eecg.toronto.edu, cs.toronto.edu, ece.toronto.edu, toronto.edu, utoronto.ca
Keywords: Deep learning, Applications
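
As a rough illustration of the scaling claim in the abstract (execution time proportional to the per-layer precision p, with the revised configuration processing 2 bits per cycle), here is a minimal Python sketch of a simplified cycle model. It is not the authors' design: the per-layer precisions, operation counts, and the assumption that the bit-parallel baseline always pays for 16-bit operands are hypothetical values chosen only to show the arithmetic.

```python
# Minimal sketch (not the authors' implementation): a back-of-the-envelope
# cycle model for the claim that execution time scales with per-layer
# precision p. All layer precisions and operation counts are hypothetical.
import math

BASELINE_BITS = 16     # bit-parallel engine always pays for 16-bit operands
BITS_PER_CYCLE = 2     # TRT-like configuration consuming 2 bits per cycle


def serial_cycles(precision_bits: int) -> int:
    """Cycles per operation when runtime tracks the precision actually used."""
    return math.ceil(precision_bits / BITS_PER_CYCLE)


def parallel_cycles(_precision_bits: int) -> int:
    """Cycles per operation on a fixed-precision bit-parallel engine."""
    return BASELINE_BITS // BITS_PER_CYCLE


# Hypothetical per-layer (precision in bits, operation count) pairs.
layers = [(9, 4_000_000), (8, 2_500_000), (5, 1_200_000), (11, 800_000)]

trt_time = sum(ops * serial_cycles(p) for p, ops in layers)
base_time = sum(ops * parallel_cycles(p) for p, ops in layers)
print(f"modelled speedup: {base_time / trt_time:.2f}x over the bit-parallel baseline")
```

Lowering a layer's precision directly reduces its cycle count in this model, which is the mechanism behind the reported speedups; the actual figures in the abstract come from the paper's post-layout evaluation, not from this toy calculation.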