Abstract: Approximate computing (AxC) reduces power consumption with minimal accuracy loss, benefiting error-tolerant, compute-intensive tasks such as machine learning, deep learning, and image processing. However, existing AxC methods often ignore vulnerability to soft errors. Such errors can interact with approximation-induced errors, causing system failures or unexpected exceptions. To our knowledge, no prior work has addressed both soft error resilience and exception avoidance in approximate floating-point computing. This gap is particularly critical in deep neural network (DNN) inference, where soft error-induced errors or exceptions can significantly affect the stability and accuracy of computations. In this paper, we introduce SERA-Float, an approximate floating-point format resilient to soft errors. Specifically, it is designed to protect floating-point computations from soft error-induced errors and exception-triggering bit flips. Unlike prior floating-point formats, SERA-Float protects the sign and exponent bits using error-correcting codes and stores 8 valid mantissa bits rather than performing coarse truncation. Additionally, by tracking critical bits in the floating-point representation, SERA-Float prevents overflow, underflow, and NaN exceptions. Our evaluation demonstrates that SERA-Float improves the reliability of floating-point operations during DNN inference by significantly reducing exceptions and ensuring the stability of computations. Moreover, it enables energy-efficient arithmetic by leveraging narrower arithmetic units, yielding up to 80.3% energy savings per multiplication with a 0.9% reduction in DNN inference accuracy.
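To make the described mechanism concrete, below is a minimal sketch in C of one way such a format could be packed and protected: the 9 sign/exponent bits of an IEEE-754 single are covered by a single-error-correcting Hamming(13,9) code, and 8 mantissa bits are retained. The struct name `sera_float_t`, the field layout, and the choice of Hamming code are illustrative assumptions, not the paper's actual encoding; the critical-bit tracking that prevents overflow, underflow, and NaN exceptions is not shown.

```c
/* Illustrative sketch only: ECC-protected sign/exponent plus an
 * 8-bit mantissa. Layout and code choice are assumptions, not the
 * paper's actual SERA-Float encoding. */
#include <stdint.h>
#include <string.h>

typedef struct {
    uint16_t sign_exp; /* bit 8 = sign, bits 7..0 = biased exponent */
    uint8_t  mantissa; /* 8 most significant IEEE-754 mantissa bits */
    uint8_t  parity;   /* 4 parity bits of a Hamming(13,9) SEC code */
} sera_float_t;

/* Parity-check masks over the 9 data bits (bit k of the mask selects
 * data bit dk of sign_exp) for the standard Hamming(13,9) layout. */
static const uint16_t H_MASK[4] = { 0x15B, 0x06D, 0x18E, 0x1F0 };
/* Codeword position of data bit dk in that layout. */
static const uint8_t  D_POS[9]  = { 3, 5, 6, 7, 9, 10, 11, 12, 13 };

static uint8_t parity9(uint16_t x) {  /* even parity of the low 9 bits */
    x ^= x >> 8; x ^= x >> 4; x ^= x >> 2; x ^= x >> 1;
    return (uint8_t)(x & 1);
}

static uint8_t hamming_parity(uint16_t data) {
    uint8_t p = 0;
    for (int i = 0; i < 4; i++)
        p |= (uint8_t)(parity9(data & H_MASK[i]) << i);
    return p;
}

/* Encode: keep the top 8 mantissa bits and protect sign + exponent. */
static sera_float_t sera_encode(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    sera_float_t out;
    out.sign_exp = (uint16_t)((bits >> 23) & 0x1FF); /* sign | exponent */
    out.mantissa = (uint8_t)((bits >> 15) & 0xFF);   /* mantissa bits 22..15 */
    out.parity   = hamming_parity(out.sign_exp);
    return out;
}

/* Decode: a nonzero syndrome names the flipped codeword position, so a
 * single upset in the sign or exponent bits is corrected in place. */
static float sera_decode(sera_float_t v) {
    uint8_t syndrome = (uint8_t)(hamming_parity(v.sign_exp) ^ v.parity);
    if (syndrome) {
        for (int k = 0; k < 9; k++)
            if (D_POS[k] == syndrome) { v.sign_exp ^= (uint16_t)(1u << k); break; }
        /* Syndromes 1, 2, 4, 8 mean a parity bit flipped: data is intact. */
    }
    uint32_t bits = ((uint32_t)v.sign_exp << 23) | ((uint32_t)v.mantissa << 15);
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}
```

Under this assumed layout, the value occupies 21 bits (9 protected, 8 mantissa, 4 parity), so a bit flip in the sign or exponent, which would otherwise change the magnitude drastically or produce NaN/infinity, is corrected on decode, while a flip in the low mantissa bits perturbs the value only slightly.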