Spiking Reinforcement Learning Enhanced by Bioinspired Event Source of Multi-Dendrite Spiking Neuron and Dynamic Thresholds
Abstract: Deep reinforcement learning (DRL) achieves success through the representational capabilities of deep neural networks (DNNs). Compared to DNNs, spiking neural networks (SNNs), known for their binary spike-based information processing, exhibit more biological characteristics. However, using SNNs to simulate richer, more biologically plausible neuronal dynamics for decision-making tasks remains challenging, as it depends directly on how information is integrated and transmitted within SNNs. Inspired by the advanced computational power of dendrites in biological neurons, we propose a multi-dendrite spiking neuron (MDSN) model based on the multi-compartment spiking neuron (MCN), expanding the dendrite types from two to multiple and deriving an analytical solution for the somatic membrane potential. We apply the MDSN to deep distributional reinforcement learning to enhance its performance on complex decision-making tasks. The proposed model effectively and adaptively integrates and transmits meaningful information from different sources. It uses a bioinspired event-enhanced dendrite structure to emphasize salient features, and, by utilizing dynamic membrane potential thresholds, it adaptively maintains the homeostasis of the MDSN. Extensive experiments on Atari games show that the proposed model outperforms several state-of-the-art spiking distributional RL models by a significant margin.
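To make the two mechanisms named in the abstract concrete, the sketch below shows a generic multi-dendrite leaky integrate-and-fire neuron with an adaptive firing threshold. It is not the paper's MDSN formulation (the analytical somatic solution and the event-enhanced dendrite weighting are not reproduced); all parameter names and values here are illustrative assumptions, chosen only to show how several dendritic compartments can feed one soma whose threshold rises after each spike and relaxes back toward a resting value.

```python
import numpy as np


class MultiDendriteLIF:
    """Toy multi-dendrite LIF neuron with a dynamic threshold (illustrative only)."""

    def __init__(self, n_dendrites=3, tau_d=5.0, tau_s=10.0, tau_th=50.0,
                 v_th0=1.0, th_jump=0.5, dt=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # Hypothetical dendrite-to-soma coupling weights.
        self.w = rng.uniform(0.5, 1.5, n_dendrites)
        self.alpha_d = np.exp(-dt / tau_d)    # dendritic leak per step
        self.alpha_s = np.exp(-dt / tau_s)    # somatic leak per step
        self.alpha_th = np.exp(-dt / tau_th)  # threshold relaxation per step
        self.v_th0, self.th_jump = v_th0, th_jump
        self.u_d = np.zeros(n_dendrites)      # dendritic potentials
        self.v = 0.0                          # somatic membrane potential
        self.v_th = v_th0                     # adaptive firing threshold

    def step(self, dendrite_inputs):
        # Each dendrite leaky-integrates its own input stream.
        self.u_d = self.alpha_d * self.u_d + dendrite_inputs
        # The soma integrates a weighted sum of the dendritic potentials.
        self.v = self.alpha_s * self.v + float(np.dot(self.w, self.u_d))
        spike = float(self.v >= self.v_th)
        if spike:
            self.v = 0.0                # hard reset after a spike
            self.v_th += self.th_jump   # raise threshold (homeostatic adaptation)
        # Threshold decays back toward its resting value.
        self.v_th = self.v_th0 + self.alpha_th * (self.v_th - self.v_th0)
        return spike


if __name__ == "__main__":
    neuron = MultiDendriteLIF()
    rng = np.random.default_rng(1)
    spikes = [neuron.step(rng.poisson(0.3, 3)) for _ in range(200)]
    print(f"spike count over 200 steps: {int(sum(spikes))}")
```

In this toy version, homeostasis emerges because a burst of spikes raises the threshold and suppresses further firing until it relaxes; the paper's contribution lies in how the multiple dendrite types and the analytically derived somatic dynamics shape this integration inside a distributional RL agent.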