NR-DARTS: Node Rewiring for Differentiable Architectures with Adaptive SE-Fusion

ICLR 2026 Conference Submission 16841 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Neural Architecture Search, Pruning, Node Rewiring
Abstract: Efficient model design is critical for deployment on edge and embedded hardware where compute, latency, and energy budgets dominate feasibility, which has driven the adoption of Neural Architecture Search (NAS) to discover task specific backbones. Because multi objective search balances accuracy and efficiency under proxy evaluation, the resulting architectures can be suboptimal for deployment, and post search structured pruning is commonly applied to NAS discovered models to further reduce compute or latency while maintaining accuracy. However, conventional channel or operation level pruning is ill suited to NAS cells since local saliency proxies are unreliable under multi branch interactions and weight sharing, and fine grained removals break cell wise dimensional coupling and trigger cascading realignments. Thus, we propose NR-DARTS, Node Rewiring for Differentiable Architectures with Adaptive SE Fusion, which deletes low importance intermediate nodes scored by learnable gates. Then, the proposed method rewires their predecessors directly to each successor, and compensates at the successor input via a learned linear aggregation followed by channel wise SE recalibration. By preserving cell structure and feature dimensional consistency, our method avoids misalignment issues common in fine grained pruning and achieves reliable performance. On CIFAR-10 dataset, NR-DARTS reduces FLOPs by 27.3\% from 338.94M to 246.41M while maintaining accuracy at 93.81\% versus 94.07\% for the DARTS baseline and it outperforms channel and operation level pruning under matched budgets. Ablation studies further show that adaptive SE fusion improves accuracy at similar FLOPs compared to fixed summation and explain the effectiveness of the compensation mechanism.
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 16841