Keywords: spiking neural networks, explainable ai, layer-wise relevance propagation, conservation laws, numerical methods, saliency maps, event-driven systems, graph neural computation
TL;DR: We interpret LRP in SNNs as a discrete conservative transport scheme on the unrolled spatio-temporal graph, giving a continuity-equation view and implementation-level checks for explanation correctness.
Abstract: Explainability methods for event-driven dynamical models such as Spiking Neural Networks (SNNs) are often adapted from techniques designed for static networks, even though the explanation itself is defined over a spatio-temporal computation graph. We present a formal view of Layer-wise Relevance Propagation (LRP) in SNNs as a discrete conservative transport scheme, where relevance is the transported quantity and the LRP redistribution rule defines a flux on graph edges. Under mild conditions, the backward pass satisfies a discrete continuity equation on the unrolled graph. This links LRP's conservation axiom to numerical conservation laws and yields implementation-level checks for explanation correctness, including local residual accounting and global relevance conservation. The contribution is conceptual and formal, with an illustrative anomaly-detection example.
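The conservation checks mentioned in the abstract can be made concrete on a single layer. Below is a minimal sketch, not the paper's implementation: an epsilon-stabilized LRP redistribution step for one linear layer, viewed as a conservative transport step, followed by a global conservation residual check. The function name `lrp_step`, the shapes, and the stabilizer value are illustrative assumptions.

```python
import numpy as np

def lrp_step(a, W, R_out, eps=1e-6):
    """Redistribute output relevance R_out back to the inputs of a linear layer.

    a:     input activations, shape (n_in,)
    W:     weight matrix, shape (n_in, n_out)
    R_out: relevance assigned to the outputs, shape (n_out,)
    """
    z = a @ W                            # pre-activations (flux denominators)
    s = R_out / (z + eps * np.sign(z))   # stabilized per-output "flux density"
    return a * (W @ s)                   # relevance transported to the inputs

rng = np.random.default_rng(0)
a = rng.random(4)
W = rng.standard_normal((4, 3))
R_out = rng.random(3)

R_in = lrp_step(a, W, R_out)

# Global conservation check: with a small stabilizer, total relevance is
# approximately preserved across the layer; the residual quantifies the
# stabilizer-induced sink discussed in the paper.
residual = R_out.sum() - R_in.sum()
print(abs(residual))
```

The residual plays the role of a local source/sink term in the discrete continuity equation: it is exactly zero when `eps = 0` and the denominators are nonzero, and grows with the stabilizer, which is why stabilizer sensitivity is itself a diagnostic.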
Journal Opt In: Yes, I want to participate in the IOP focus collection submission
Journal Corresponding Email: research@sylvesterkaczmarek.com
Journal Notes: This workshop paper is intentionally concise and primarily theoretical. For the journal version, we plan a substantial extension rather than a minor revision: a fuller derivation of the discrete continuity formulation, clearer treatment of spike-threshold nodes, zero-denominator cases, stabilizer-induced sinks, and signed relevance channels, together with broader empirical validation across additional SNN architectures and datasets. We also plan expanded diagnostic analysis of local and global conservation residuals, stabilizer sensitivity, and comparisons with alternative attribution methods such as Integrated Gradients, DeepLIFT-style approaches, and existing SNN explanation baselines. The extended manuscript will also discuss generalization beyond the basic LIF setting, including richer neuron models and implementation considerations. We expect to develop this journal version after incorporating feedback from the workshop. No special constraints at this stage.
Submission Number: 1