Highlights
• We employ a hybrid strategy that automatically switches from Adam to SGD to reduce the impact of gradient sparsity during the training of BSNNs. This approach significantly accelerates optimization while improving accuracy.
• We introduce a simple shift-based BN algorithm that accelerates inference and matches the effect of the computationally expensive standard BN with little accuracy loss.
• This work presents the first systematic comparison of the effectiveness of commonly used strategies for optimizing BSNN models, laying the groundwork for robust theoretical foundations in this field.
• Exhaustive benchmark comparisons on various neuromorphic datasets show that the proposed framework achieves consistently higher accuracy with a much lower storage requirement than other state-of-the-art SNN systems.
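The first highlight describes a hybrid optimizer schedule that starts with Adam and later hands the same parameters to SGD. The sketch below is only a minimal illustration of such a hand-over in PyTorch; the toy model, synthetic data, switch epoch, and learning rates are assumptions for the example, not the paper's reported switching criterion or settings.

# Minimal sketch of an Adam-to-SGD hand-over during training.
# The model, data, switch epoch, and learning rates are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
criterion = nn.CrossEntropyLoss()

x = torch.randn(256, 32)             # synthetic inputs
y = torch.randint(0, 10, (256,))     # synthetic labels

switch_epoch = 10                    # assumed switch point
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(30):
    if epoch == switch_epoch:
        # Re-create the optimizer over the same parameters with SGD;
        # the learned weights carry over because they live in the model.
        optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()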
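The second highlight concerns approximating inference-time batch normalization so that its scaling can be realized with bit shifts instead of multiplications. One common way to do this, shown in the sketch below, is to round the folded BN scale to the nearest power of two; the function names (ap2, shift_based_bn) and the random test values are assumptions for illustration, not the paper's exact algorithm.

# Illustrative power-of-two approximation of inference-time batch normalization.
# Names and constants are assumptions made for this sketch.
import numpy as np

def ap2(x):
    """Approximate each value by the nearest power of two, preserving sign."""
    return np.sign(x) * 2.0 ** np.round(np.log2(np.abs(x) + 1e-12))

def standard_bn(x, gamma, beta, mean, var, eps=1e-5):
    scale = gamma / np.sqrt(var + eps)        # folded BN scale (multiplication)
    return scale * (x - mean) + beta

def shift_based_bn(x, gamma, beta, mean, var, eps=1e-5):
    scale = ap2(gamma / np.sqrt(var + eps))   # power-of-two scale -> bit shift in hardware
    return scale * (x - mean) + beta

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
gamma, beta = rng.normal(size=4), rng.normal(size=4)
mean, var = rng.normal(size=4), rng.uniform(0.5, 2.0, size=4)

# The two outputs differ only by the power-of-two rounding of the scale.
print(np.max(np.abs(standard_bn(x, gamma, beta, mean, var)
                    - shift_based_bn(x, gamma, beta, mean, var))))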