- Keywords: neural architecture search, nas, automl
- TL;DR: This paper proposes a novel neural architecture search framework that enables reinforcement learning to search in an embedding space by using architecture encoders and decoders.
- Abstract: Neural architecture search (NAS) with reinforcement learning is a powerful framework for automatically discovering neural architectures. However, its application is restricted by discontinuous and high-dimensional search spaces, which make optimization difficult. To resolve these problems, we propose NAS in embedding space (NASES), a novel framework. Unlike other reinforcement-learning-based NAS approaches that search over a discrete and high-dimensional architecture space, NASES enables reinforcement learning to search in an embedding space by using architecture encoders and decoders. Experiments demonstrate that the performance of the final architecture found by the NASES procedure is comparable with that of other popular NAS approaches on the CIFAR-10 image classification task. NASES remained effective and performant even when only architecture-embedding search and a pre-trained controller were applied, without other NAS tricks such as parameter sharing. In particular, NASES achieved a considerable reduction in search cost, requiring fewer than 100 searched architectures on average to reach a final architecture.
- Code: https://anonymous.4open.science/r/b5cee050-c345-4acf-bc34-4d7233edbe80/
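The core idea above can be sketched in a few lines: an encoder maps a discrete architecture description into a continuous embedding, the controller perturbs that embedding, and a decoder maps the result back to a discrete architecture. This is a minimal illustrative sketch only; all names, dimensions, and the use of fixed random linear maps are assumptions, not the paper's actual implementation (where the encoder/decoder would be trained so that decoding reconstructs the input architecture).

```python
# Hypothetical sketch of the NASES idea: search in a continuous embedding
# space instead of the discrete architecture space. All names and sizes
# here are illustrative assumptions, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

N_OPS = 8        # size of the discrete operation vocabulary (assumed)
N_LAYERS = 6     # number of layers per architecture (assumed)
EMB_DIM = 4      # dimensionality of the continuous embedding (assumed)

# Encoder/decoder as fixed random linear maps; in practice these would be
# trained so that decode(encode(arch)) reconstructs the architecture.
W_enc = rng.normal(size=(N_LAYERS * N_OPS, EMB_DIM))
W_dec = rng.normal(size=(EMB_DIM, N_LAYERS * N_OPS))

def encode(arch):
    """Map a discrete architecture (list of op indices) to an embedding."""
    one_hot = np.eye(N_OPS)[arch].reshape(-1)  # flatten to (N_LAYERS*N_OPS,)
    return one_hot @ W_enc                     # continuous vector (EMB_DIM,)

def decode(z):
    """Map an embedding back to a discrete architecture (argmax per layer)."""
    logits = (z @ W_dec).reshape(N_LAYERS, N_OPS)
    return logits.argmax(axis=1).tolist()

# The controller perturbs the embedding rather than sampling discrete ops,
# so a small continuous move yields a nearby candidate architecture:
arch = [0, 3, 1, 5, 2, 7]
z = encode(arch)
new_arch = decode(z + 0.1 * rng.normal(size=EMB_DIM))
print(len(new_arch))  # 6: still a valid architecture description
```

The payoff of this formulation is that the controller's action space is a low-dimensional continuous vector instead of a long sequence of discrete choices, which is what makes the optimization tractable for reinforcement learning.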