DNA: A General Dynamic Neural Network Accelerator

Published: 2025 · Last Modified: 06 Jan 2026 · IEEE Trans. Computers 2025 · CC BY-SA 4.0
Abstract: Owing to their demonstrated superiority, dynamic neural networks (NNs), which adapt their network structure to each input, have been recognized as an efficient alternative to conventional static NNs. However, the implications of dynamic NNs for neural processing unit (NPU) architecture design remain unexplored. We therefore analyze the characteristics of dynamic NNs and the sources of inefficiency when executing them on existing hardware. Our analysis shows that existing NPUs, designed for static NNs, cannot efficiently handle the dynamic operators and agent-dependent data loading in dynamic NNs. To this end, we present DNA, an accelerator optimized for the challenges of running general dynamic NNs. First, to improve the execution efficiency of dynamic operators, we propose a transverter-based online scheduling strategy that rapidly generates an efficient schedule for each dynamic operator. Second, to mitigate the hardware idleness caused by the non-deterministic, agent-dependent data access patterns of dynamic NNs, we propose a novel predictor-based prefetching strategy that achieves effective data preloading at negligible cost. We implemented DNA by integrating an additional online scheduler into a typical many-core baseline accelerator. Across a range of dynamic NNs, DNA achieves a $3.48\times$ speedup and $3.03\times$ energy savings over the baseline accelerator.
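To make the scheduling challenge concrete, the following is a minimal sketch (not taken from the paper; `gate`, `light_branch`, and `heavy_branch` are hypothetical names) of why dynamic NNs resist static scheduling: the operator actually executed depends on the input, so an NPU cannot fix its schedule or its data-loading pattern ahead of time.

```python
def gate(x):
    # Hypothetical routing decision: choose a branch from the input's magnitude.
    return "light" if abs(x) < 1.0 else "heavy"

def light_branch(x):
    return 2 * x           # cheap operator

def heavy_branch(x):
    return x * x + 3 * x   # expensive operator

def dynamic_net(x):
    # In a dynamic NN, the executed operator (and hence its compute cost
    # and memory access pattern) is only known at run time, after the
    # gate evaluates. A static NN would run the same operators for every x.
    branch = gate(x)
    return light_branch(x) if branch == "light" else heavy_branch(x)
```

In this toy setting, an online scheduler would pick a mapping for whichever branch the gate selects, and a prefetcher would try to predict the gate's outcome so the chosen branch's weights are already loaded.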