Abstract: Linear recurrent neural networks enable powerful long-range sequence modeling with constant memory usage and constant time per token during inference. These architectures hold promise for streaming applications at the edge, but deployment in resource-constrained environments requires hardware-aware optimizations to minimize latency and energy consumption.
Unstructured sparsity offers a compelling solution, enabling substantial reductions in compute and memory requirements when accelerated by compatible hardware platforms.
In this paper, we conduct a scaling study to investigate the Pareto front of performance and efficiency across inference compute budgets.
We find that highly sparse linear RNNs *consistently* achieve better efficiency-performance trade-offs than dense baselines, with $2\times$ less compute and $36$\% less memory at iso-accuracy.
Our models achieve state-of-the-art results on a real-time streaming task for audio denoising.
By quantizing our sparse models to fixed-point arithmetic and deploying them on the Intel Loihi 2 neuromorphic chip for real-time processing, we translate model compression into tangible gains of $42\times$ lower latency and $149\times$ lower energy consumption compared to a dense model on an edge GPU.
Our findings showcase the transformative potential of unstructured sparsity, paving the way for highly efficient recurrent neural networks in real-world, resource-constrained environments.
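To make the ingredients mentioned in the abstract concrete, the sketch below illustrates, in plain NumPy, what unstructured magnitude pruning, fixed-point weight quantization, and a single step of a diagonal linear RNN with sparse projections could look like. This is a minimal, hedged illustration: all function names, variable names, and shapes (e.g., `magnitude_prune_mask`, `A_diag`, `mask_B`) are assumptions made for exposition and are not taken from the paper; the authors' actual implementation is available in the linked repository.

```python
import numpy as np

# Illustrative sketch only; names and shapes are assumptions, not the paper's code.

def magnitude_prune_mask(W, sparsity=0.9):
    """Unstructured pruning: keep the largest-magnitude weights, zero the rest."""
    k = int(np.ceil((1.0 - sparsity) * W.size))          # number of weights to keep
    threshold = np.partition(np.abs(W).ravel(), -k)[-k]  # k-th largest magnitude
    return (np.abs(W) >= threshold).astype(W.dtype)

def quantize_fixed_point(W, num_bits=8):
    """Symmetric fixed-point quantization: map floats to integers plus one scale."""
    scale = max(float(np.max(np.abs(W))), 1e-8) / (2 ** (num_bits - 1) - 1)
    return np.round(W / scale).astype(np.int32), scale

def sparse_linear_rnn_step(x_t, h_prev, A_diag, B, C, mask_B, mask_C):
    """One step of a diagonal linear RNN with sparse input/output projections."""
    h_t = A_diag * h_prev + (B * mask_B) @ x_t   # constant memory and time per token
    y_t = (C * mask_C) @ h_t                     # zeroed weights can be skipped by
    return h_t, y_t                              # sparsity-aware hardware
```

Under these assumptions, pruning the projection matrices to, say, 90% sparsity removes roughly nine out of ten multiply-accumulates in those layers; whether that translates into lower latency and energy depends on hardware that can skip the zeroed weights, which is the trade-off the abstract's efficiency figures refer to.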
Lay Summary: Novel computing architectures show great promise in bringing energy-efficient AI capabilities to the edge. In this paper, we demonstrate how sparse models, which have a large portion of their parameters set to zero, can achieve the same accuracy as dense models in audio denoising and keyword spotting while requiring a fraction of the compute and memory. Moreover, we show that the Intel Loihi 2 neuromorphic research chip can leverage this sparsity to run models with $42\times$ lower latency and $149\times$ lower energy consumption compared to a dense model on an edge GPU.
Link To Code: https://github.com/IntelLabs/SparseRNNs
Primary Area: General Machine Learning->Hardware and Software
Keywords: Linear RNNs, Sparsity, Pruning, Quantization, Neuromorphic Hardware
Submission Number: 15842