Abstract: The rise of machine learning (ML) has necessitated the development of innovative processing engines. However, developing specialized hardware accelerators can incur enormous one-time engineering expenses that should be avoided in low-cost embedded ML systems. In addition, embedded systems have tight resource constraints that prevent them from affording the "full-blown" ML accelerators seen in many cloud environments. In embedded settings, a more lightweight custom function unit (CFU) is preferable. We present CFU Playground, an open-source toolchain for accelerating embedded ML on FPGAs through the use of CFUs.