A High-Accuracy Probabilistic-Based Sigmoid Approximator Incorporating Memory-Saving and Time-Efficient Strategies

Wenhao Lu, Chi-Sing Leung, Feng Qin, Tiancheng Cao, Wenwen Zhang, Yucen Shi, Yiping Ke, Zhenya Zang

Published: 01 Jan 2026, Last Modified: 06 Apr 2026. IEEE Transactions on Neural Networks and Learning Systems. License: CC BY-SA 4.0
Abstract: The sigmoid function, a widely used activation function in neural networks, has attracted considerable attention regarding its approximation and deployment on edge devices. A recent study applied the Gaussian cumulative distribution function to approximate the sigmoid function. Although this probabilistic method simplifies hardware implementation through a low-complexity binary search, it requires intensive random access memory (RAM) storage, and the search process is time-consuming. Moreover, it targets minimizing the maximum mapping error rather than ensuring accurate approximation across all inputs. To address these issues, this article proposes a hardware-friendly, high-accuracy probabilistic sigmoid approximator. We first show that, for any input, the output of the sigmoid function is exactly the probability that a logistic random variable is less than or equal to that input. We then introduce an indirect random-variable quantization strategy that reduces memory usage while minimizing precision loss, and the latency of the proposed scheme is also optimized. On this basis, a resource-efficient, low-latency sigmoid approximator is implemented in digital circuits. Finally, we derive an upper bound on the absolute error between the approximator's output and the true sigmoid value. Experiments verify the effectiveness of our scheme and show superior performance in approximation accuracy and resource cost.
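For context, the equivalence the abstract refers to is the standard identity that the sigmoid is the cumulative distribution function (CDF) of the standard logistic distribution; a minimal sketch, assuming a logistic random variable $X$ with location 0 and scale 1:

```latex
% For X ~ Logistic(0, 1), its CDF coincides with the sigmoid:
F_X(x) \;=\; \Pr(X \le x) \;=\; \frac{1}{1 + e^{-x}} \;=\; \sigma(x)
```

Evaluating the sigmoid at an input $x$ is therefore the same as evaluating the probability $\Pr(X \le x)$, which is what makes a probabilistic (sampling- or table-based) approximation of $\sigma(x)$ possible.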