DARTS: Differentiable Architecture Search

Published: 21 Dec 2018, Last Modified: 29 Sept 2024 · ICLR 2019 Conference Blind Submission · Readers: Everyone
Abstract: This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques.
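The core idea in the abstract is to relax the discrete choice of operation on each edge of the architecture into a softmax-weighted mixture of all candidate operations, so the architecture parameters can be optimized by gradient descent alongside the network weights. The sketch below illustrates this continuous relaxation in PyTorch; it is a minimal illustration, not the paper's released implementation, and the candidate operation set, initialization scale, and `MixedOp` naming are assumptions made for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative candidate operations for one edge (not the paper's exact set).
def make_candidate_ops(channels):
    return nn.ModuleList([
        nn.Identity(),                                             # skip connection
        nn.Conv2d(channels, channels, 3, padding=1, bias=False),   # 3x3 conv
        nn.Conv2d(channels, channels, 5, padding=2, bias=False),   # 5x5 conv
        nn.MaxPool2d(3, stride=1, padding=1),                      # 3x3 max pool
    ])

class MixedOp(nn.Module):
    """Continuous relaxation of one edge: the output is a
    softmax-weighted sum over all candidate operations."""
    def __init__(self, channels):
        super().__init__()
        self.ops = make_candidate_ops(channels)
        # Architecture parameters (alpha): one logit per candidate op,
        # learned by gradient descent together with the network weights.
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

if __name__ == "__main__":
    edge = MixedOp(channels=16)
    x = torch.randn(2, 16, 32, 32)
    print(edge(x).shape)  # torch.Size([2, 16, 32, 32])
    # After search, the edge is discretized by keeping the operation with
    # the largest softmax weight: edge.ops[edge.alpha.argmax()].
```

In the full algorithm the architecture parameters and the network weights are updated in an alternating (bilevel) fashion on validation and training data respectively; the sketch only shows how the relaxation makes the architecture differentiable.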
Keywords: deep learning, autoML, neural architecture search, image classification, language modeling
TL;DR: We propose a differentiable architecture search algorithm for both convolutional and recurrent networks, achieving competitive performance with the state of the art using orders of magnitude less computation resources.
Code: [quark0/darts](https://github.com/quark0/darts) + [56 community implementations](https://paperswithcode.com/paper/?openreview=S1eYHoC5FX)
Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10), [ImageNet](https://paperswithcode.com/dataset/imagenet), [NAS-Bench-201](https://paperswithcode.com/dataset/nas-bench-201), [Penn Treebank](https://paperswithcode.com/dataset/penn-treebank)
Community Implementations: [46 code implementations (CatalyzeX)](https://www.catalyzex.com/paper/darts-differentiable-architecture-search/code)