Confirmation: our paper adheres to reproducibility best practices. In particular, we confirm that all important details required to reproduce results are described in the paper, the authors agree to the paper being made available online through OpenReview under a CC-BY 4.0 license (https://creativecommons.org/licenses/by/4.0/), and the authors have read and commit to adhering to the AutoML 2025 Code of Conduct (https://2025.automl.cc/code-of-conduct/).
TL;DR: We propose an iterative MCTS approach for NAS that learns the optimal order of nodes in the search tree.
Abstract: Recent work has shown Monte-Carlo Tree Search (MCTS) to be an effective approach to Neural Architecture Search (NAS), capable of producing competitive architectures. However, the performance of the tree search is highly sensitive to the node visiting order.
If the initial nodes are discriminative, good configurations can be found efficiently with minimal sampling. In contrast, non-discriminative initial nodes may require exploring exponentially many nodes before good solutions are found.
In this paper, we present an iterative NAS approach that jointly trains with MCTS and learns the optimal order of the nodes in the tree.
With our approach, the order of node visits in the tree is iteratively refined based on the estimated average accuracy of the nodes on the validation set. In this way, good architectures are more likely to naturally emerge at the beginning of the tree, improving the search process.
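The reordering idea can be illustrated with a minimal sketch. Here, an architecture is assumed to be a tuple of per-level choices, and each tree level's choices are ranked by the average validation accuracy of sampled architectures containing that choice; the function name, data layout, and choice labels are hypothetical, and the paper's actual estimation procedure may differ.

```python
from collections import defaultdict

def reorder_choices(samples, num_levels):
    """Rank the choices at each tree level by the average validation
    accuracy of sampled architectures containing that choice.

    samples: list of (architecture, val_accuracy) pairs, where an
             architecture is a tuple of per-level choices.
    Returns, per level, the choices sorted best-first, so promising
    branches appear earlier in the next search iteration.
    """
    # Collect the validation accuracies observed for each choice at each level.
    stats = [defaultdict(list) for _ in range(num_levels)]
    for arch, acc in samples:
        for level, choice in enumerate(arch):
            stats[level][choice].append(acc)

    # Sort each level's choices by mean accuracy, best first.
    order = []
    for level in range(num_levels):
        means = {c: sum(a) / len(a) for c, a in stats[level].items()}
        order.append(sorted(means, key=means.get, reverse=True))
    return order

# Hypothetical example: two levels (operation, downsampling choice).
samples = [(("conv3", "pool"), 0.9),
           (("conv5", "pool"), 0.7),
           (("conv3", "skip"), 0.8)]
order = reorder_choices(samples, 2)
# "conv3" (mean 0.85) is visited before "conv5" (mean 0.70) at level 0.
```

In a full search, such a ranking would be recomputed after each sampling round, so that high-accuracy branches migrate toward the front of the tree.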
Experiments on two classification benchmarks and a segmentation task show that the proposed method improves search performance compared to state-of-the-art MCTS approaches for NAS.
Submission Number: 42