An Energy-Efficient Hybrid SRAM-Based In-Memory Computing Macro for Artificial Intelligence Edge Devices

Published: 01 Jan 2023, Last Modified: 13 May 2025, Circuits Syst. Signal Process. 2023, CC BY-SA 4.0
Abstract: The von Neumann computing architecture faces considerable challenges in achieving high throughput and energy efficiency for artificial intelligence (AI) edge devices. In-memory computing (IMC) is a new computing paradigm that improves the energy efficiency and throughput of dot-product operations for AI edge devices. In this paper, a 6T2M hybrid SRAM (HSRAM)-based IMC macro is proposed that supports non-volatile storage and in-memory dot-product (IMDP) operation. The HSRAM bit cell is designed using NMOS and memristor devices, which reduces the area overhead and, owing to its non-volatile storage capability, improves the energy efficiency compared to prior SRAM-based IMC macros. A 128 × 128 IMC macro based on HSRAM is designed in 65 nm technology. For normal memory operation, the read margin of the proposed HSRAM bit cell is improved by 84.1% compared to the 4T2R ReRAM cell, and the write margin is enhanced by 44.01% compared to the 8T SRAM cell. For IMDP operation, the macro computes 128 parallel dot products on binary input and binary weight values at 500 MHz and achieves an energy efficiency of 134.5 TOPS/W at VDD = 1 V. According to Monte Carlo simulations, the IMDP operation has a standard deviation of 4.24% in the accumulation value, which corresponds to a classification accuracy of 96.71% on the MNIST dataset and 82.51% on the CIFAR-10 dataset.
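To make the IMDP operation concrete, the sketch below is a minimal functional model of one cycle of 128 parallel binary dot products, as described in the abstract. It assumes a {0, 1} encoding of inputs and weights and a simple column-wise sum; the function name `binary_imdp`, the encoding, and the omission of analog bitline behavior, ADC quantization, and any circuit-level detail are assumptions for illustration only, not the paper's implementation.

```python
import numpy as np

def binary_imdp(inputs, weights):
    """Functional model of one in-memory dot-product (IMDP) cycle.

    inputs:  (128,) binary activations {0, 1}, one per row/wordline.
    weights: (128, 128) binary weights {0, 1}, one per bit cell.

    Returns the 128 column-wise accumulation values the macro would
    produce in parallel (before any quantization or scaling).
    """
    # Each column accumulates input_i AND weight_ij, analogous to summing
    # the contributions of all activated rows on a shared column line.
    return inputs @ weights  # shape (128,)

# Example: random binary inputs and weights for a 128 x 128 array.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=128)
w = rng.integers(0, 2, size=(128, 128))
partial_sums = binary_imdp(x, w)
print(partial_sums.shape, int(partial_sums.max()))
```

In this model, the 4.24% standard deviation reported from Monte Carlo simulations would appear as noise added to `partial_sums` before classification, which is how analog accumulation variation degrades network accuracy.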