DHA: End-to-End Joint Optimization of Data Augmentation Policy, Hyper-parameter and Architecture

Published: 03 Nov 2022, Last Modified: 28 Feb 2023. Accepted by TMLR.
Abstract: Automated machine learning (AutoML) usually involves several crucial components, such as Data Augmentation (DA) policy, Hyper-Parameter Optimization (HPO), and Neural Architecture Search (NAS). Although many strategies have been developed for automating each of these components in isolation, joint optimization remains challenging due to the greatly increased search dimensionality and the differing input types of each component. In parallel, the common NAS practice of searching for the optimal architecture first and then retraining it before deployment often suffers from low performance correlation between the searching and retraining stages. An end-to-end solution that integrates the AutoML components and returns a ready-to-use model at the end of the search is therefore desirable. In view of this, we propose DHA, which achieves joint optimization of Data augmentation policy, Hyper-parameter, and Architecture. Specifically, end-to-end NAS is achieved in a differentiable manner by optimizing a compressed lower-dimensional feature space, while the DA policy and HPO are treated as dynamic schedulers that adapt themselves to the updates of the network parameters and network architecture at the same time. Experiments show that DHA achieves state-of-the-art (SOTA) results on various datasets and search spaces. To the best of our knowledge, we are the first to efficiently and jointly optimize DA policy, NAS, and HPO in an end-to-end manner without retraining.
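To make the joint-optimization idea in the abstract concrete, here is a minimal, self-contained sketch of a single training loop that simultaneously updates network weights, relaxed architecture logits, a DA-policy magnitude, and a hyper-parameter (the learning rate). All names, the toy quadratic objective, and the finite-difference gradients are illustrative assumptions standing in for the paper's actual losses and autograd machinery; they are not DHA's implementation.

```python
import numpy as np

# Illustrative toy sketch (NOT the paper's code): one loop jointly updates
#   w     - "network weights"
#   alpha - architecture logits, softmax-relaxed so they are differentiable
#   m     - a DA-policy magnitude, adapted alongside the network (a "dynamic scheduler")
#   lr    - a hyper-parameter, itself adjusted inside the same loop
# A hand-made quadratic objective stands in for the real training loss.

rng = np.random.default_rng(0)
w = 0.1 * rng.normal(size=4)    # toy network weights
alpha = np.zeros(3)             # toy architecture logits
m = 0.5                         # toy DA magnitude
lr = 0.1                        # hyper-parameter, adapted in-loop

def loss(w, alpha, m):
    arch = np.exp(alpha) / np.exp(alpha).sum()   # differentiable relaxation
    target = np.array([0.2, 0.3, 0.5])           # assumed "best" architecture mix
    return ((arch - target) ** 2).sum() + 0.01 * (w ** 2).sum() + (m - 0.3) ** 2

for step in range(200):
    eps = 1e-5  # finite differences stand in for autograd
    base = loss(w, alpha, m)
    g_w = np.array([(loss(w + eps * e, alpha, m) - base) / eps for e in np.eye(4)])
    g_a = np.array([(loss(w, alpha + eps * e, m) - base) / eps for e in np.eye(3)])
    g_m = (loss(w, alpha, m + eps) - base) / eps
    w -= lr * g_w               # weight update
    alpha -= lr * g_a           # architecture updated in the SAME loop: no retraining stage
    m -= lr * g_m               # DA policy adapts to the current network state
    lr = max(0.01, lr * 0.999)  # hyper-parameter schedule updated in-loop
```

The point of the sketch is structural: because every component is updated inside one differentiable loop, the model that exists when the loop ends is the deployable one, avoiding the search-then-retrain correlation gap the abstract describes.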
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:

**Abstract**
- Modified the sentence "while ... at the same time" to "while DA policy and HPO are regarded as dynamic schedulers, which adapt themselves to the update of network parameters and architecture at the same time".

**Introduction**
- Removed the sentence "In DHA, ... super-network."

**Experiments**
- Search space: added a discussion of generalizing the augmentation optimizer to incorporate more augmentation strategies.
- Baselines: added a discussion of how the baseline methods were implemented for a fair comparison, i.e., the fairness of the experiments.
- Implementation details: added a discussion of how the hyper-parameters of DHA were set.
- Ablation study: added results on the FLOPs/latency of the discovered models.
Video: https://drive.google.com/drive/folders/185QTv2rVkUZudCC4w1qxRGXuYt313qum?usp=sharing
Code: https://github.com/gitkaichenzhou/DHA
Assigned Action Editor: ~Kevin_Swersky1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 255