Keywords: SNNs, hardware faults, bottleneck problem, practicality, small fragments
Abstract: Spiking Neural Networks (SNNs) attract researchers due to their energy-efficient operation on neuromorphic devices. Despite this efficiency, SNNs are vulnerable to hardware faults that impair learnable parameters (e.g., Stuck-At Faults (SAFs) in synaptic weights). Such impairment reduces the network's capacity to absorb information; when an input carries more information than this reduced capacity can handle, the SNN fails to absorb it correctly, which we refer to as **the bottleneck problem**. Existing approaches rely on complex algorithms or direct modification of most synaptic weights, limiting their practicality on neuromorphic devices. This paper proposes a simple yet effective input control mechanism, grounded in a thorough motivation study, to address the problem. Our mechanism divides each input sample into small fragments according to the best fragmentation strategy, derived by analyzing the characteristics of the input samples and diagnosing the current influence of faults. Experimental results demonstrate that our mechanism significantly improves fault tolerance over existing methods across various SNN models, without requiring complex algorithms or direct weight modification. It also improves the fault tolerance of SNN models implemented on a Field-Programmable Gate Array (FPGA) device.
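To make the fragmentation idea concrete, below is a minimal, hypothetical sketch of splitting an input sample into smaller fragments before presenting it to a faulty SNN. The function name `fragment_input`, the choice of splitting axis, and the fixed fragment count are illustrative assumptions; the paper's actual strategy selects the fragmentation based on input characteristics and a diagnosis of the current fault influence.

```python
import numpy as np

def fragment_input(sample: np.ndarray, num_fragments: int) -> list:
    """Split an input sample into smaller fragments along its feature axis,
    so each fragment carries less information per presentation.
    Illustrative only: not the paper's exact fragmentation strategy."""
    return np.array_split(sample, num_fragments, axis=-1)

# Hypothetical usage: a 784-pixel MNIST-like sample split into 4 fragments,
# each of which would be presented to the (faulty) SNN separately.
sample = np.random.rand(784)
fragments = fragment_input(sample, num_fragments=4)
print([f.shape for f in fragments])  # [(196,), (196,), (196,), (196,)]
```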
Supplementary Material: zip
Primary Area: applications to neuroscience & cognitive science
Submission Number: 12869