Abstract: Low-power Internet of Things devices are growing in number and computational capability, pushing toward the ubiquitous deployment of smart sensors that embed both sensing and processing on board. Thus, one of the key emerging requirements for such devices is to provide intelligence on resource-constrained processors that consume only a few milliwatts of power. This work surveys, compares, and evaluates seven recent and popular microcontrollers, with power envelopes ranging from a few up to hundreds of milliwatts, against a Convolutional Neural Network workload for a non-trivial task: face recognition. The evaluation reports key tiny machine learning (tinyML) performance figures for the target microcontrollers in terms of inference time, power consumption, energy per inference, and computational efficiency. Experimental results highlight best-in-class power consumption for the Ambiq Apollo3 and Sony Spresense, at 41.3 μW/MHz and 128.2 μW/MHz respectively. The MAX78000 and xCORE.ai lead in computational efficiency, at 117 MAC/cycle and 7.69 MAC/cycle respectively, and achieve the fastest inference at 1.4 ms and 1.5 ms respectively. The platforms requiring the least energy per inference were the MAX78000 and GAP8, at 0.09 mJ/inference and 0.52 mJ/inference respectively. The benchmarked tinyML network will be released openly to allow other researchers to run future comparisons on novel low-power microprocessors.
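As a minimal sketch of how the abstract's derived metrics relate to raw measurements, the snippet below computes energy per inference, power normalized by clock frequency, and MAC/cycle from average power, inference time, clock frequency, and MAC count. All numeric values in the example are hypothetical and are not the paper's measurements.

```python
# Hypothetical sketch: derive per-inference metrics from basic measurements.
def derive_metrics(avg_power_mw, inference_time_ms, clock_mhz, macs_per_inference):
    """Return (mJ/inference, uW/MHz, MAC/cycle) from raw measurements."""
    energy_mj = avg_power_mw * (inference_time_ms / 1000.0)      # E = P * t
    uw_per_mhz = (avg_power_mw * 1000.0) / clock_mhz             # power normalized by clock
    cycles = clock_mhz * 1e6 * (inference_time_ms / 1000.0)      # cycles spent on one inference
    mac_per_cycle = macs_per_inference / cycles                   # computational efficiency
    return energy_mj, uw_per_mhz, mac_per_cycle

# Illustrative made-up numbers, not results from the paper:
print(derive_metrics(avg_power_mw=60.0, inference_time_ms=1.4,
                     clock_mhz=100.0, macs_per_inference=16_000_000))
```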