DCP: Learning Accelerator Dataflow for Neural Network via Propagation

TMLR Paper1211 Authors

31 May 2023 (modified: 24 Aug 2023) · Rejected by TMLR
Abstract: Deep neural network (DNN) hardware (HW) accelerators have achieved great success in improving DNNs' performance and efficiency. One key reason is the dataflow used when executing a DNN layer, which covers on-chip data partitioning, computation parallelism, and scheduling policy, and has a large impact on latency and energy consumption. Unlike prior works that require considerable effort from HW engineers to design suitable dataflows for different DNNs, this work proposes an efficient data-centric approach, named Dataflow Code Propagation (DCP), to automatically find the optimal dataflow for DNN layers in seconds without human effort. It has several attractive benefits that prior approaches do not have. (i) We translate the HW dataflow configuration into a code representation in a unified dataflow coding space, which can be optimized by back-propagating gradients given a DNN layer or network. (ii) DCP learns a neural predictor to efficiently update the dataflow codes towards the desired gradient directions to minimize various optimization objectives (e.g., latency and energy). (iii) It can be easily generalized to unseen HW configurations in a zero-shot or few-shot learning manner. For example, without using additional training data, DCP surpasses the GAMMA method, which performs a full search using thousands of samples. Extensive experiments on several representative models such as MobileNet, ResNet, and ViT show that DCP outperforms its counterparts in various settings.
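
The gradient-based code update described in the abstract can be pictured with a small sketch. The snippet below is a hypothetical illustration, not the authors' implementation: a frozen neural cost predictor estimates a cost (e.g., latency) from a layer descriptor and a continuous relaxation of the dataflow code, and only the code is updated by back-propagated gradients. All names, tensor dimensions, and the MLP architecture are assumptions.

```python
# Minimal sketch, assuming a pretrained differentiable cost predictor.
# Names (CostPredictor, layer_desc, code) and sizes are illustrative only.
import torch
import torch.nn as nn

class CostPredictor(nn.Module):
    """Neural surrogate mapping (layer descriptor, dataflow code) to a scalar cost."""
    def __init__(self, layer_dim: int = 8, code_dim: int = 16, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(layer_dim + code_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # predicted latency/energy
        )

    def forward(self, layer_desc: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([layer_desc, code], dim=-1))

# Assume the predictor was already trained on (layer, dataflow, cost) samples.
predictor = CostPredictor()
predictor.eval()
for p in predictor.parameters():
    p.requires_grad_(False)  # freeze the predictor; only the code is optimized

layer_desc = torch.randn(1, 8)                  # placeholder DNN-layer features
code = torch.rand(1, 16, requires_grad=True)    # continuous relaxation of the dataflow code
optimizer = torch.optim.Adam([code], lr=1e-2)

for step in range(200):
    optimizer.zero_grad()
    cost = predictor(layer_desc, code).mean()   # objective: predicted cost
    cost.backward()                             # gradients flow into the code only
    optimizer.step()

# A real system would then project `code` back to a valid discrete dataflow
# configuration (tiling, loop order, parallelism) before deployment.
print("predicted cost after optimization:", float(cost))
```

In this reading, "propagation" refers to pushing gradients of the predicted objective back into the dataflow code itself, rather than searching the discrete configuration space sample by sample.
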
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Brian_Kingsbury1
Submission Number: 1211