Keywords: Efficient DNN, Approximate Multiplier, Information Bottleneck, Efficient Learning/Inference
TL;DR: Improving DNN accuracy while achieving efficient learning and inference by using approximate multipliers.
Abstract: Accuracy gains in Deep Neural Networks (DNNs) often plateau despite extensive training, retraining, and fine-tuning. This paper presents an analytical study of approximate multipliers as a route to further accuracy improvements. Leveraging the principles of Information Bottleneck (IB) theory, we analyze the enhanced information and feature extraction capabilities that approximate multipliers provide, and Information Plane (IP) analysis gives us a detailed picture of DNN behavior under this approach. Our analysis indicates that the technique can break through existing accuracy barriers while offering computational and energy efficiency benefits: unlike computationally intensive traditional methods, it relies on less demanding optimization techniques, and approximate multipliers reduce energy consumption during both training and inference. Experimental results support the potential of this method, suggesting it is a promising direction for DNN optimization.
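The abstract does not specify which approximate multiplier design the study uses. As a concrete illustration only, the sketch below implements one widely used design, Mitchell's logarithmic multiplier, in NumPy and applies it to a dense-layer forward pass; the function names (`mitchell_multiply`, `approx_linear`) are hypothetical and not taken from the paper.

```python
import numpy as np

def mitchell_multiply(a, b):
    """Elementwise approximate multiply via Mitchell's logarithmic algorithm.

    Each operand x > 0 is written as 2**k * (1 + f) with f in [0, 1);
    the logs are approximated by k + f, added, and inverted piecewise.
    NOTE: this is an illustrative stand-in, not the paper's multiplier.
    """
    sign = np.sign(a) * np.sign(b)
    a_abs, b_abs = np.abs(a), np.abs(b)
    tiny = np.finfo(float).tiny                 # guard against log2(0)
    ka = np.floor(np.log2(np.maximum(a_abs, tiny)))
    kb = np.floor(np.log2(np.maximum(b_abs, tiny)))
    fa = a_abs / 2.0 ** ka - 1.0                # fractional parts in [0, 1)
    fb = b_abs / 2.0 ** kb - 1.0
    s = fa + fb
    approx = np.where(s < 1.0,
                      2.0 ** (ka + kb) * (1.0 + s),
                      2.0 ** (ka + kb + 1.0) * s)
    return sign * np.where((a_abs == 0) | (b_abs == 0), 0.0, approx)

def approx_linear(x, w):
    """Dense-layer forward pass: approximate multiplies, exact accumulation."""
    # x: (batch, d_in), w: (d_in, d_out) -> output: (batch, d_out)
    return mitchell_multiply(x[:, :, None], w[None, :, :]).sum(axis=1)

# Quick comparison against an exact matrix multiply.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w = rng.standard_normal((8, 3))
exact = x @ w
approx = approx_linear(x, w)
print("max |approx - exact|:", np.max(np.abs(approx - exact)))
```

Mitchell's multiplier bounds the per-product relative error at roughly 11%, which is the kind of controlled noise injection that an IB/IP analysis of approximate arithmetic could examine; the paper's actual design and error profile may differ.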
Submission Number: 53