Abstract: Improving the efficiency of Neural Architecture Search (NAS) is a challenging but significant task that has received much attention. Previous studies mainly adopt Differentiable Architecture Search (DARTS) and improve its search strategies or modules to enhance search efficiency. Recently, some methods have begun to consider data reduction for speedup, but they are not tightly coupled with the architecture search process and cannot capture the training dynamics of DARTS well, resulting in sub-optimal performance. To this end, this work pioneers an exploration of the critical role of dataset characteristics in the bi-level optimization of DARTS, and then proposes a novel Bi-level Data Pruning (BDP) paradigm that targets both the weight and architecture levels of DARTS to enhance efficiency from a data perspective. Specifically, we introduce a progressive bi-level data pruning strategy that uses supernet prediction dynamics as the metric to gradually prune samples unsuitable for DARTS during the search. An effective automatic class balance constraint is also integrated into BDP to suppress the potential class imbalance introduced by data-efficient algorithms. Comprehensive evaluations on the NAS-Bench-201 search space, the DARTS search space, and a MobileNet-like search space validate that BDP reduces search costs by over 50% while achieving superior performance when applied to the baseline DARTS. Moreover, we demonstrate that BDP integrates harmoniously with advanced DARTS variants, such as P-DARTS, PC-DARTS, EG-NAS, and $\beta$-DARTS, offering an approximately $2\times$ speedup with minimal performance compromise.
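The abstract only summarizes the method; the precise pruning metric and schedule are defined in the main text. As a rough, hypothetical illustration of the two ideas mentioned here (scoring samples by supernet prediction dynamics and enforcing a class-balance floor), the sketch below shows one plausible way such a pruning step could look. All names (`prune_step`, `min_per_class_ratio`, the flip-count score) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def prune_step(pred_history, labels, keep_ratio, min_per_class_ratio=0.5):
    """Hypothetical sketch of one progressive pruning step.

    pred_history: (T, N) int array of the supernet's predicted labels for each
                  of N samples over the last T search epochs (assumed recorded).
    labels:       (N,) ground-truth labels.
    keep_ratio:   fraction of samples to keep at this step.
    Returns indices of the retained samples.
    """
    T, N = pred_history.shape
    correct = (pred_history == labels[None, :])              # (T, N) correctness
    flips = (pred_history[1:] != pred_history[:-1]).sum(0)   # prediction changes
    # Samples the supernet already fits stably contribute little signal;
    # score unstable / still-misclassified samples as more informative.
    score = flips + (T - correct.sum(0))

    n_keep = int(keep_ratio * N)
    order = np.argsort(-score)                               # most informative first
    keep = set(order[:n_keep].tolist())

    # Simple class-balance constraint: guarantee each class a minimum share.
    for c in np.unique(labels):
        idx_c = np.where(labels == c)[0]
        min_c = int(min_per_class_ratio * keep_ratio * len(idx_c))
        kept_c = [i for i in idx_c if i in keep]
        if len(kept_c) < min_c:
            # Top up with the highest-scoring samples of this class not yet kept.
            missing = [i for i in idx_c[np.argsort(-score[idx_c])] if i not in keep]
            keep.update(missing[: min_c - len(kept_c)])

    return np.array(sorted(keep))

# Toy usage: 5 epochs of predictions for 100 samples from 10 classes.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=100)
pred_history = rng.integers(0, 10, size=(5, 100))
kept = prune_step(pred_history, labels, keep_ratio=0.5)
print(f"kept {len(kept)} of {len(labels)} samples")
```

In this sketch the retained subset would be recomputed at intervals during the search, so pruning progresses alongside the bi-level optimization rather than being fixed before it, which is the coupling the abstract emphasizes.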