Abstract: Graph neural networks have recently emerged as a promising approach for low-power, low-latency event-vision applications. Such event-graphs naturally exploit the sparsity of event data and incorporate the temporal detail captured by event-based sensors directly into the edge features used in graph convolution. In this paper we study the promise of event-graphs for processing data from event-based modalities beyond vision. Specifically, we describe how the approach can be adapted to the spectro-temporal domain to perform event-audio classification. We evaluate the approach on the Spiking Heidelberg Digits dataset and achieve a test accuracy of 94.3%. This is notably better than many state-of-the-art spiking neural networks, despite requiring, in many cases, an order of magnitude fewer parameters. Event-graph neural networks therefore promise to be a powerful, general approach for processing a variety of event-based modalities, not only vision.
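To illustrate the core idea of the spectro-temporal adaptation described above, the sketch below builds an event-graph from audio events represented as (time, channel) tuples, connecting events that are close in both time and frequency channel and storing the temporal and spectral offsets as edge features. This is a minimal, hypothetical construction, not the paper's implementation; the function name, the causal-connectivity rule, and the `radius_t` / `radius_c` parameters are illustrative assumptions.

```python
import numpy as np

def build_event_graph(times, channels, radius_t=0.01, radius_c=3):
    """Connect each event to later events within a small
    spectro-temporal neighbourhood (illustrative rule, not
    the paper's exact construction).

    times    : event timestamps in seconds
    channels : frequency-channel index of each event
    Returns (edges, edge_feats) where edge_feats holds the
    (delta-time, delta-channel) offsets used as edge features.
    """
    n = len(times)
    edges, edge_feats = [], []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dt = times[j] - times[i]
            dc = channels[j] - channels[i]
            # causal edge: j happens after i, within both radii
            if 0 < dt <= radius_t and abs(dc) <= radius_c:
                edges.append((i, j))
                edge_feats.append((dt, float(dc)))
    return np.array(edges), np.array(edge_feats, dtype=float)

# Tiny example: three events on two channels
edges, feats = build_event_graph([0.0, 0.005, 0.02], [0, 1, 0])
```

A graph convolution layer would then aggregate messages along these edges, with the (dt, dc) features letting the network weight contributions by their temporal and spectral proximity; this is how temporal detail enters the model without dense spectrogram frames.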
External IDs: dblp:conf/iscas/RafeldtMNDMVPD25