MuNAS: TinyML Network Architecture Search Using Goal Attainment and Reinforcement Learning

Published: 01 Jan 2024, Last Modified: 07 Mar 2025 · MECO 2024 · CC BY-SA 4.0
Abstract: Embedded Machine Learning (ML) is increasingly pivotal in contemporary data-driven applications, particularly when operating on tiny, resource-constrained devices where model size and computational efficiency are critical. The traditional approach to crafting efficient Neural Networks (NNs) has predominantly been the purview of domain experts, entailing a substantial degree of specialised knowledge and iterative experimentation. Consequently, automating the neural architecture design process has emerged as a significant field of interest within the ML community over the past decade, especially through Evolutionary Computation (EC) based Neural Architecture Search (ENAS). This interest is fuelled largely by the speed with which Tiny Machine Learning (TinyML) models can be developed and trained. This paper introduces MuNAS, a novel NAS framework that integrates Evolutionary Algorithms (EAs) with a Goal Attainment (GA) fitness function and Reinforcement Learning (RL) for enhanced mutation guidance. MuNAS is designed to expedite the optimisation of pre-existing models and to facilitate the generation of constrained yet highly optimised TinyML models from the ground up. Through a novel building block mechanism, which infuses domain-specific knowledge into the NN construction process, we demonstrate the capability of MuNAS to modify and improve upon popular models from the MLPerf Tiny benchmark suite as part of an evolutionary algorithmic process.
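
For readers unfamiliar with goal attainment, it is a scalarisation of several competing objectives (e.g. error rate, flash size, peak RAM) against per-objective goals, so that an evolutionary search can rank candidates by how far they overshoot those goals. The sketch below is a minimal illustration of that idea, assuming hypothetical objective names, goal values, and weights; it is not the actual MuNAS fitness function, which is defined in the paper itself.

```python
# Minimal sketch of a goal-attainment style fitness for TinyML NAS.
# Objective names, goals, and weights here are illustrative assumptions,
# not the values used by MuNAS.
from dataclasses import dataclass


@dataclass
class Candidate:
    """Metrics measured for one candidate TinyML architecture (illustrative)."""
    error_rate: float      # 1 - validation accuracy
    model_size_kb: float   # flash footprint of the weights
    peak_ram_kb: float     # working-memory requirement at inference time


def goal_attainment_fitness(c: Candidate,
                            goals=(0.10, 250.0, 64.0),
                            weights=(0.10, 250.0, 64.0)) -> float:
    """Scalarise the objectives in the spirit of Gembicki's goal-attainment
    method: minimise the largest weighted overshoot of any objective past its
    goal. Weights equal to the goals express overshoot in relative terms;
    a negative value means every goal is already met."""
    objectives = (c.error_rate, c.model_size_kb, c.peak_ram_kb)
    return max((f - g) / w for f, g, w in zip(objectives, goals, weights))


# Example: compare two candidates; the smaller value is the fitter architecture.
a = Candidate(error_rate=0.12, model_size_kb=200.0, peak_ram_kb=48.0)
b = Candidate(error_rate=0.09, model_size_kb=300.0, peak_ram_kb=60.0)
print(goal_attainment_fitness(a), goal_attainment_fitness(b))
```

In an evolutionary loop, this single scalar lets the selection step treat accuracy and hardware constraints uniformly, which is what allows the search to be steered toward models that fit a device budget rather than toward accuracy alone.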