MOAZ: A Multi-Objective AutoML-Zero Framework

Published: 01 Jan 2023 · Last Modified: 24 Jan 2025 · GECCO 2023 · License: CC BY-SA 4.0
Abstract: Automated machine learning (AutoML) greatly eases the human effort of architecture engineering. However, mainstream AutoML methods such as neural architecture search (NAS) are tailored to well-designed search spaces in which promising architectures are densely distributed. In contrast, AutoML-Zero builds machine-learning algorithms from basic primitives and can explore novel architectures beyond human knowledge, offering the potential to deploy machine learning systems without relying on either feature engineering or architecture engineering. In its current form, however, AutoML-Zero optimizes only a single objective, such as accuracy, and has no mechanism to ensure that the constraints of real-world applications are satisfied. We propose MOAZ, a multi-objective variant of AutoML-Zero that distributes solutions along a Pareto front by trading off accuracy against the computational complexity of the evolved machine-learning algorithm. Besides generating distinct Pareto-optimal solutions, MOAZ effectively explores the sparse search space, improving search efficiency. Experimental results on linear regression tasks show that MOAZ reduces median complexity by 87.4% compared to AutoML-Zero while reaching the target performance 82% faster in the median case. In addition, our preliminary results on non-linear regression tasks show the potential for further improvements in search accuracy and for reducing the need for human intervention in AutoML.
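Since the core idea is selecting candidate programs that are non-dominated in the (accuracy, complexity) objective space, a minimal sketch of Pareto-front filtering may help fix intuition. This is an illustrative example only, not the authors' implementation; the names `Candidate` and `pareto_front`, and the use of error and instruction count as the two objectives, are assumptions for the sake of the sketch.

```python
# Minimal sketch of Pareto-front filtering over (error, complexity) pairs,
# the kind of two-objective trade-off MOAZ optimizes. Illustrative only.
from dataclasses import dataclass


@dataclass(frozen=True)
class Candidate:
    error: float      # lower is better (e.g., regression error)
    complexity: int   # lower is better (e.g., instruction count)


def dominates(a: Candidate, b: Candidate) -> bool:
    """a dominates b if a is no worse in both objectives and
    strictly better in at least one."""
    return (a.error <= b.error and a.complexity <= b.complexity
            and (a.error < b.error or a.complexity < b.complexity))


def pareto_front(population: list[Candidate]) -> list[Candidate]:
    """Keep only the non-dominated candidates."""
    return [c for c in population
            if not any(dominates(other, c) for other in population)]


if __name__ == "__main__":
    pop = [Candidate(0.10, 50), Candidate(0.08, 120),
           Candidate(0.10, 80), Candidate(0.30, 10)]
    # Candidate(0.10, 80) is dominated by Candidate(0.10, 50)
    # and is therefore filtered out.
    print(pareto_front(pop))
```

In a multi-objective evolutionary search, a filter like this (typically generalized to non-dominated sorting with a diversity measure) replaces the single-objective fitness comparison, so the population converges toward a spread of accuracy/complexity trade-offs rather than a single best program.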