Comparative Analysis of Inference Performance of Pre-Trained Deep Neural Networks in Analog Accelerators

ICMLA 2024 · Published: 2024 · License: CC BY-SA 4.0
Abstract: Resistive crossbars built from non-volatile memory devices have become promising components for implementing Deep Neural Networks (DNNs) in hardware. However, crossbar-based computation faces a notable challenge from device- and circuit-level non-idealities. In this study, we explore three recent crossbar simulation tools, namely AIHWKIT, CrossSim, and MemTorch, and use them to evaluate four pre-trained DNNs for image classification on the CIFAR-100 dataset. All three tools provide strong analog simulation functionality that effectively mimics actual hardware environments. To the best of our knowledge, this is the first study to use all three analog-accelerator simulation tools, representing three distinctive hardware settings, to evaluate DNN robustness under analog noise and nonlinearities. We first test the robustness of four DNNs (VGG19, InceptionV3, ResNet50, and MobileNetV2) under different levels of white Gaussian noise as baselines. We then evaluate their inference performance on the three tools to determine their resilience to analog noise and nonlinearities in a hardware environment. Results show that while all DNNs suffer the expected performance degradation, ResNet50, owing to its deep residual structure, outperforms the others in two of the three simulators despite real-world hardware imperfections. Specifically, ResNet50 shows a mere 3% performance drop in CrossSim and, interestingly, a 2% improvement in AIHWKIT. Meanwhile, in MemTorch, InceptionV3 exhibits only a 3% drop from its baseline while the other three models decline by 12-19%, underscoring InceptionV3's resilience in that environment. This indicates that DNN model design plays an important role in resilience to analog noise and nonlinearities, and that inference performance also depends on the specific hardware implementation.
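As a rough illustration of the white-Gaussian-noise baseline described in the abstract, the sketch below perturbs a pre-trained network's weights with zero-mean Gaussian noise before inference and measures the accuracy drop. This is a minimal sketch assuming a PyTorch workflow; the relative (weight-scaled) noise model, the sigma values, and the ResNet50/CIFAR-100 setup are illustrative assumptions, not the paper's exact protocol.

```python
import copy
import torch
import torchvision

@torch.no_grad()
def add_weight_noise(model: torch.nn.Module, sigma: float) -> torch.nn.Module:
    """Return a copy of `model` with N(0, (sigma*|w|)^2) noise added to each weight.

    Assumption: noise scaled relative to each weight's magnitude; the paper may
    use a different noise model.
    """
    noisy = copy.deepcopy(model)
    for param in noisy.parameters():
        param.add_(torch.randn_like(param) * sigma * param.abs())
    return noisy

@torch.no_grad()
def accuracy(model: torch.nn.Module, loader) -> float:
    """Top-1 classification accuracy of `model` over `loader`."""
    model.eval()
    correct = total = 0
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

if __name__ == "__main__":
    # Hypothetical setup: a ResNet50 (here untrained, for illustration; the
    # paper uses models pre-trained on CIFAR-100) and the CIFAR-100 test set.
    model = torchvision.models.resnet50(num_classes=100)
    test_set = torchvision.datasets.CIFAR100(
        root="data", train=False, download=True,
        transform=torchvision.transforms.ToTensor())
    loader = torch.utils.data.DataLoader(test_set, batch_size=128)

    # Sweep increasing noise levels to chart the degradation curve.
    for sigma in (0.0, 0.05, 0.1, 0.2):
        acc = accuracy(add_weight_noise(model, sigma), loader)
        print(f"sigma={sigma:.2f}  acc={acc:.4f}")
```

A crossbar simulator evaluation would replace the noise-injection step with the tool's own conversion path (e.g., mapping layers onto simulated analog tiles), which is what distinguishes the three hardware settings compared in the paper.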