A Hybrid-Domain Floating-Point Compute-in-Memory Architecture for Efficient Acceleration of High-Precision Deep Neural Networks

Published: 01 Jan 2025 · Last Modified: 16 May 2025 · CoRR 2025 · CC BY-SA 4.0
Abstract: Compute-in-memory (CIM) has shown significant potential for efficiently accelerating deep neural networks (DNNs) at the edge, particularly for speeding up quantized models in inference applications. Recently, there has been growing interest in developing floating-point-based CIM macros to preserve the accuracy of high-precision DNN models in both inference and training tasks. Yet current implementations rely primarily on digital methods, leading to substantial power consumption. This paper introduces a hybrid-domain CIM architecture that integrates analog and digital CIM within the same memory cell to efficiently accelerate high-precision DNNs. Specifically, we develop area-efficient circuits and energy-efficient analog-to-digital conversion techniques to realize this architecture. Comprehensive circuit-level simulations show that the proposed design achieves notable energy efficiency with no accuracy loss on benchmark models.
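The abstract does not specify how the floating-point computation is partitioned across the two domains. A minimal numerical sketch of one common hybrid FP-CIM scheme is given below, assuming the typical split in this line of work: exponent handling in the digital domain, mantissa multiply-accumulate in the analog array, and a limited-resolution ADC on the accumulated result. The function name `hybrid_fp_mac` and the parameters `adc_bits` and `mant_bits` are illustrative assumptions, not taken from the paper.

```python
# Illustrative model of a hybrid-domain floating-point dot product (not the
# authors' circuit): digital exponent alignment, fixed-point mantissa MAC in
# the "analog" array, and ADC quantization of the accumulated value.
import numpy as np

def hybrid_fp_mac(x, w, adc_bits=8, mant_bits=8):
    # Digital domain: extract per-operand exponents (x = m * 2**e, m in [0.5, 1)).
    mx, ex = np.frexp(x)
    mw, ew = np.frexp(w)
    e = ex + ew                   # exponent of each product
    e_max = e.max()               # shared exponent frame for accumulation

    # Quantize mantissas to the fixed-point precision the analog array stores.
    mx = np.round(mx * (1 << mant_bits)) / (1 << mant_bits)
    mw = np.round(mw * (1 << mant_bits)) / (1 << mant_bits)

    # "Analog" domain: products shifted into the shared exponent frame, summed.
    acc = (mx * mw * np.exp2(e - e_max)).sum()

    # ADC: quantize the accumulated value to adc_bits levels. Each aligned
    # product has magnitude < 1, so the sum lies in (-len(x), len(x)).
    scale = len(x)
    q = np.round(acc / scale * (1 << (adc_bits - 1)))
    q = q / (1 << (adc_bits - 1)) * scale

    # Digital domain again: restore the shared exponent.
    return q * np.exp2(float(e_max))

rng = np.random.default_rng(0)
x = rng.standard_normal(64).astype(np.float32)
w = rng.standard_normal(64).astype(np.float32)
print(hybrid_fp_mac(x, w), float(x @ w))  # quantized vs. exact dot product
```

Under this assumed split, only the narrow mantissa datapath is exposed to analog non-idealities and ADC resolution, while the dynamic range carried by the exponents stays in the digital domain, which is one plausible reading of how a hybrid-domain design could retain high-precision accuracy at low conversion cost.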
