Abstract: As neural networks are widely deployed in embedded and end-device applications, power consumption and performance optimization for specific scenarios, such as edge computing, are gaining increasing attention. FPGAs have been adopted in embedded neural network applications due to their high parallelism, energy efficiency, reconfigurability, and customizability. However, traditional FPGA programming techniques cannot meet the rapid, iterative development requirements of embedded applications, which also limits the adoption of FPGAs. Researchers have attempted to break through the language-level barrier and implement an end-to-end flow: Xilinx's Vitis and LeFlow have been proposed to import and execute neural network applications on FPGAs. However, existing works all target chips from foreign manufacturers such as Xilinx, and support for domestic chips is lacking. This paper proposes a complete neural network compilation toolchain for a domestic FPGA backend, which consists of a developer-friendly Python front-end and a complete FPGA back-end development flow. The toolchain relieves FPGA developers from cumbersome hardware programming tasks and therefore improves the development efficiency of embedded neural network applications. The main contributions of this paper are as follows. (1) We propose a complete FPGA neural network toolchain. (2) We summarize a code specification friendly to the high-level synthesis process and, based on this specification, propose a code generation algorithm built on operator templates. Experimental results show that the proposed general compilation toolchain correctly adapts neural networks to the domestic FPGA backend, reducing flip-flop resource usage by about 64% compared with code generated by a state-of-the-art compilation framework.