Logarithm-Approximate Floating-Point Multiplier for Hardware-efficient Inference in Probabilistic Circuits

Published: 13 Jul 2023, Last Modified: 22 Aug 2023 (TPM 2023)
Keywords: Probabilistic Circuits, Hardware Acceleration, Probabilistic Machine Learning, Machine Learning
Abstract: Machine learning models are increasingly being deployed onto edge devices, for example, for smart sensing, reinforcing the need for reliable and efficient modeling families that can perform a variety of tasks in an uncertain world (e.g., classification, outlier detection) without re-deploying the model. Probabilistic circuits (PCs) offer a promising avenue for such scenarios, as they support efficient and exact computation of various probabilistic inference tasks by design, in addition to having a sparse structure. A critical challenge towards hardware acceleration of PCs on edge devices is the high computational cost associated with multiplications in the model. In this work, we propose the first approximate computing framework for energy-efficient PC computation. For this, we leverage addition-as-int approximate multipliers, which are significantly more energy-efficient than regular floating-point multipliers while preserving computation accuracy. We analyze the expected approximation error and show through hardware simulation results that our approach leads to a significant reduction in energy consumption with low approximation error and provides a remedy for hardware acceleration of general-purpose probabilistic models.
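
The addition-as-int approximate multiplier named in the abstract can be illustrated with a short sketch. The C listing below (the function name approx_mul and the test values are illustrative, not taken from the paper) shows the standard Mitchell-style logarithmic trick that the description suggests: the bit pattern of a positive IEEE-754 float is a piecewise-linear approximation of its base-2 logarithm, so adding two bit patterns as integers and subtracting the bit pattern of 1.0f approximates a floating-point multiplication with a single integer addition. This is a minimal sketch under that assumption; the paper's exact hardware formulation may differ.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Mitchell-style "addition-as-int" approximate multiplier (sketch).
     * Valid for positive, normal floats whose product does not
     * overflow or underflow -- e.g., the probabilities in [0, 1]
     * that PC inference multiplies. */
    static float approx_mul(float a, float b) {
        uint32_t ia, ib, iprod;
        memcpy(&ia, &a, sizeof ia);   /* reinterpret bits, no conversion */
        memcpy(&ib, &b, sizeof ib);
        /* 0x3F800000 is the bit pattern of 1.0f (exponent bias 127 << 23);
         * subtracting it once keeps the summed exponents correctly scaled. */
        iprod = ia + ib - 0x3F800000u;
        float p;
        memcpy(&p, &iprod, sizeof p);
        return p;
    }

    int main(void) {
        float x = 0.37f, y = 0.82f;   /* hypothetical edge-weight values */
        printf("exact:  %f\n", x * y);
        printf("approx: %f\n", approx_mul(x, y));
        return 0;
    }

The energy saving comes from replacing the mantissa multiplier with a plain integer adder; the price is a bounded relative error of at most a few percent per multiplication, which is the approximation error the abstract says the paper analyzes.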
Submission Number: 12