Abstract: Recent years have witnessed a surge of interest in spiking neural networks (SNNs). The performance
of SNNs hinges not only on searching apposite architectures and connection weights, similar to
conventional artificial neural networks, but also on the meticulous configuration of their intrinsic
structures. However, there has been a dearth of comprehensive studies examining the impact of
intrinsic structures; as a result, developers often find it challenging to apply a standardized configuration of
SNNs across diverse datasets or tasks. This work delves deep into the intrinsic structures of SNNs.
Initially, we draw two key conclusions: (1) the membrane time hyper-parameter is intimately linked
to the eigenvalues of the integration operation, dictating the functional topology of spiking dynamics;
(2) various hyper-parameters of the firing-reset mechanism govern the overall firing capacity of
an SNN, compensating for the injection ratio or sampling density of the input data. These findings elucidate
why the efficacy of SNNs hinges heavily on the configuration of intrinsic structures and lead to a
recommendation that enhancing the adaptability of these structures contributes to improving the
overall performance and applicability of SNNs.
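To make these two families of intrinsic hyper-parameters concrete, the following minimal sketch (our own illustration, not code from the paper) simulates a discrete-time leaky integrate-and-fire neuron: the leak factor `lam` plays the role of the membrane time hyper-parameter that sets the eigenvalue of the integration operation, while `v_th` and `v_reset` are firing-reset hyper-parameters.

```python
# Hedged sketch of a discrete-time LIF neuron, assuming the common
# leaky-integration formulation with a hard reset after each spike.
import numpy as np

def lif_simulate(inputs, lam=0.9, v_th=1.0, v_reset=0.0):
    """Run a single LIF neuron over a 1-D input current sequence."""
    u = 0.0                      # membrane potential
    spikes = np.zeros_like(inputs)
    for t, x in enumerate(inputs):
        u = lam * u + x          # leaky integration; `lam` acts as the eigenvalue
        if u >= v_th:            # firing condition
            spikes[t] = 1.0
            u = v_reset          # hard reset after a spike
    return spikes

rng = np.random.default_rng(0)
out = lif_simulate(rng.uniform(0.0, 0.5, size=100))
print("firing rate:", out.mean())
```

Sweeping `lam`, `v_th`, or the input scale in this toy model makes the firing rate shift accordingly, which is the sensitivity to intrinsic-structure configuration discussed above.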
Motivated by this insight, we propose two feasible approaches to enhance SNN learning: developing
self-connection architectures and stochastic spiking neurons to augment the
adaptability of the integration operation and firing-reset mechanism, respectively. We theoretically
prove that (1) both methods promote the expressive property for universal approximation, (2) the
incorporation of self-connection architectures fosters ample solutions and structural stability for
SNNs approximating adaptive dynamical systems, and (3) stochastic spiking neurons maintain generalization bounds with an exponential reduction in Rademacher complexity. Empirical experiments
conducted on various real-world datasets affirm the effectiveness of our proposed methods.
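As a purely illustrative companion to the two proposed approaches (assumed form, not the authors' implementation), the sketch below adds a self-connection term to the integration step and replaces the hard threshold with a Bernoulli firing rule; the names `w_self` and `temperature` are hypothetical parameters introduced only for this example.

```python
# Hedged sketch: an LIF variant with a learnable-style self-connection on the
# membrane update and a stochastic (sigmoid/Bernoulli) firing mechanism.
import numpy as np

def stochastic_selfconn_lif(inputs, lam=0.9, w_self=0.2, v_th=1.0,
                            temperature=0.25, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    u, s_prev = 0.0, 0.0
    spikes = np.zeros_like(inputs)
    for t, x in enumerate(inputs):
        # integration with an extra self-connection driven by the last spike
        u = lam * u + x + w_self * s_prev
        # stochastic firing: probability grows smoothly with (u - v_th)
        p_fire = 1.0 / (1.0 + np.exp(-(u - v_th) / temperature))
        s_prev = float(rng.random() < p_fire)
        spikes[t] = s_prev
        u -= v_th * s_prev       # soft reset scaled by the emitted spike
    return spikes

out = stochastic_selfconn_lif(np.random.default_rng(1).uniform(0.0, 0.6, size=200))
print("firing rate:", out.mean())
```

In this toy form, the self-connection lets past spiking activity feed back into the integration dynamics, while the temperature of the stochastic firing rule smooths the otherwise hard threshold, mirroring the adaptability arguments sketched in the abstract.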