Keywords: Compute-in-Memory, Deep Learning, Noise-Aware Training, Binary Neural Network, Simulation
TL;DR: We show how a differentiable simulator for compute-in-memory hardware can make neural networks robust to hardware noise and inform hardware design.
Abstract: Compute-in-Memory (CIM) accelerators for neural networks promise large efficiency gains, enabling deep learning applications on extremely resource-constrained devices. Unlike classical digital processors, computations on CIM accelerators are subject to a variety of noise sources, such as process variations, thermal effects, and quantization. In this work, we show how fundamental hardware design choices influence the predictive performance of neural networks and how training these models to be hardware-aware can make them more robust for CIM deployment. Through various experiments, we make the trade-offs between energy efficiency and model capacity explicit and showcase the benefits of taking a systems view on the co-design of CIM accelerators and neural network training.
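To make the idea of hardware-aware training concrete, here is a minimal sketch of a linear layer that injects differentiable models of CIM non-idealities into the forward pass during training. This is an illustrative assumption in a PyTorch setting, not the paper's actual simulator: the `NoisyCIMLinear` module, the multiplicative Gaussian noise model for device variation, and the straight-through-estimator quantizer are all hypothetical stand-ins.

```python
import torch
import torch.nn as nn

class NoisyCIMLinear(nn.Module):
    """Linear layer with a simple differentiable model of CIM non-idealities.

    Hypothetical sketch: multiplicative Gaussian weight noise stands in for
    process/thermal variation, and a straight-through estimator handles
    weight quantization, so standard backprop trains through both.
    """

    def __init__(self, in_features, out_features, weight_bits=4, noise_std=0.05):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.levels = 2 ** weight_bits - 1  # number of quantization steps
        self.noise_std = noise_std          # relative device-variation noise

    def quantize(self, w):
        # Uniform symmetric quantization; the straight-through estimator
        # passes gradients through the non-differentiable rounding step.
        scale = w.abs().max().clamp(min=1e-8)
        w_q = torch.round(w / scale * self.levels) / self.levels * scale
        return w + (w_q - w).detach()

    def forward(self, x):
        w = self.quantize(self.linear.weight)
        if self.training:
            # Resample noise every step so the network learns weights that
            # remain accurate under perturbed (noisy) analog computation.
            w = w * (1 + self.noise_std * torch.randn_like(w))
        return nn.functional.linear(x, w, self.linear.bias)
```

Training with such a layer exposes the network to the same perturbations it would face on the accelerator, which is one way the efficiency/capacity trade-off (e.g., fewer weight bits, higher noise) can be made explicit at training time.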