Generalization Properties of NAS under Activation and Skip Connection Search

Published: 31 Oct 2022, 18:00; Last Modified: 31 Dec 2022, 00:55 · NeurIPS 2022 Accept
Keywords: neural architecture search, convergence, generalization, neural tangent kernel
TL;DR: We provide convergence and generalization guarantees for neural architecture search under various activation functions and residual connections.
Abstract: Neural Architecture Search (NAS) has fostered the automatic discovery of state-of-the-art neural architectures. Despite the progress achieved with NAS, theoretical guarantees for NAS have so far received little attention. In this work, we study the generalization properties of NAS under a unifying framework that enables (deep) layer skip connection search and activation function search. To this end, we derive lower (and upper) bounds on the minimum eigenvalue of the Neural Tangent Kernel (NTK) in the (in)finite-width regime, for a search space that includes mixed activation functions, fully connected networks, and residual neural networks. We use the minimum eigenvalue to establish generalization error bounds for NAS under stochastic gradient descent training. Importantly, we theoretically and experimentally show how the derived results can guide NAS to select top-performing architectures, even without training, leading to a train-free algorithm based on our theory. Accordingly, our numerical validation sheds light on the design of computationally efficient methods for NAS. Our analysis is non-trivial due to the coupling of various architectures and activation functions under the unifying framework, and is of independent interest in deep learning theory for providing a lower bound on the minimum eigenvalue of the NTK.
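The quantity at the heart of the abstract is the minimum eigenvalue of the empirical NTK Gram matrix, which a train-free NAS score can evaluate at initialization. As an illustrative sketch only (not the paper's algorithm or search space), the snippet below computes this eigenvalue for a toy two-layer ReLU network by forming the parameter Jacobian explicitly; all function names, the architecture, and the width/data sizes are assumptions for the example.

```python
import numpy as np


def relu(z):
    return np.maximum(z, 0.0)


def ntk_min_eigenvalue(X, m=64, seed=0):
    """Minimum eigenvalue of the empirical NTK Gram matrix for a toy
    two-layer network f(x) = v^T relu(W x) / sqrt(m) at random init.

    The NTK entry is K[i, j] = <grad_theta f(x_i), grad_theta f(x_j)>,
    so K = J J^T where J stacks the per-sample parameter gradients.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((m, d))  # hidden-layer weights
    v = rng.standard_normal(m)       # output-layer weights

    pre = X @ W.T                    # (n, m) pre-activations
    act = relu(pre)                  # (n, m) hidden features
    dact = (pre > 0).astype(float)   # (n, m) ReLU derivative

    # Per-sample gradients w.r.t. v and W, flattened into Jacobian rows.
    J_v = act / np.sqrt(m)                                      # (n, m)
    J_W = (v * dact)[:, :, None] * X[:, None, :] / np.sqrt(m)   # (n, m, d)
    J = np.concatenate([J_v, J_W.reshape(n, m * d)], axis=1)    # (n, m + m*d)

    K = J @ J.T  # empirical NTK Gram matrix, (n, n), symmetric PSD
    return float(np.linalg.eigvalsh(K)[0]), K


rng = np.random.default_rng(1)
X = rng.standard_normal((8, 5))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm inputs
lam_min, K = ntk_min_eigenvalue(X)
print(lam_min)
```

For generic unit-norm inputs and width much larger than the number of samples, the Gram matrix is almost surely positive definite, so `lam_min` is strictly positive; a larger value would, in the spirit of the paper's bounds, suggest more favorable optimization and generalization behavior for that architecture.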
Supplementary Material: pdf