Towards a High Resolution Multimodal Neuromorphic Eventset

Published: 01 Jan 2025, Last Modified: 25 Jul 2025 · ISCAS 2025 · CC BY-SA 4.0
Abstract: The number of deployed artificial intelligence (AI) models is growing rapidly. Large language models (LLMs) are compute- and memory-intensive in both training and inference, regularly having billions of parameters and requiring petabytes of training data. Neuromorphic computing seeks to improve scaling in machine learning by designing biologically inspired systems that bypass the memory bandwidth limitations inherent in the hardware LLMs currently run on. Neuromorphic algorithm-hardware co-design requires reproducible, low-level, hardware-optimised training and testing inputs. In this work, we propose a framework to generate the first high-resolution, multimodal, phonetically rich neuromorphic eventset, including a novel synchronisation signal. This overcomes the limitations of conventional datasets by encoding information as events: a low-level representation that captures the underlying physical phenomenon with high fidelity while remaining sparse, compressed, and digital. We show that events are highly compressible and that datasets recorded with traditional methods would require more than 100× more memory to capture the same level of temporal granularity.
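To make the memory claim concrete, the sketch below works through a back-of-envelope comparison between an event stream and dense frames sampled at the same temporal granularity. It assumes an address-event representation (AER) in which each event is a (timestamp, x, y, polarity) tuple packed into 64 bits; the sensor resolution, event rate, and byte counts are illustrative assumptions, not figures from the paper.

```python
# Illustrative parameters (assumptions, not from the paper).
WIDTH, HEIGHT = 1280, 720      # "high-resolution" event sensor
DURATION_S = 1.0               # one second of recording
TEMPORAL_RES_S = 1e-6          # microsecond event timestamps
EVENT_RATE = 1e6               # assumed 1M events/s under moderate activity

# Address-event representation: one event = (timestamp, x, y, polarity).
# A common packing is 64 bits per event (e.g. 32-bit timestamp + packed address).
BYTES_PER_EVENT = 8
events_bytes = EVENT_RATE * DURATION_S * BYTES_PER_EVENT

# Dense frames sampled at the same temporal granularity (one frame per
# microsecond, 1 byte per pixel) to capture equivalent timing information.
frames_per_second = 1.0 / TEMPORAL_RES_S
frames_bytes = frames_per_second * DURATION_S * WIDTH * HEIGHT * 1

print(f"events: {events_bytes / 1e6:.1f} MB")   # ~8 MB
print(f"frames: {frames_bytes / 1e12:.2f} TB")  # ~0.92 TB
print(f"ratio:  {frames_bytes / events_bytes:,.0f}x")
```

Under these assumptions the dense recording needs over 100,000× the memory of the event stream, which is consistent with the abstract's claim of "more than 100×"; the exact ratio depends on scene activity and the chosen event packing.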