Bridging Quantitative Optimization and Qualitative Reasoning: LLM-Enhanced Neural Architecture Search with Synergistic Weights

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Neural Architecture Search (NAS), Differentiable Architecture Search (DARTS), Synergistic weights, Large language models, LLM reasoning
Abstract: Differentiable Neural Architecture Search (NAS) has revolutionized deep learning by automating architecture design, but it still faces two interdependent limitations: unreliable connection evaluation based solely on local edge weights leads to suboptimal architecture discretization, while static search spaces prevent the discovery of innovative patterns for optimization. Existing approaches treat these as separable problems and lack either architectural insight or quantitative grounding. To bridge the gap between quantitative optimization and qualitative reasoning directly, we propose SWNAS, which introduces two key innovations: (1) Synergistic Weights that combine edge and node importance for globally aware architecture evaluation, overcoming the limitations of myopic local optimization, and (2) Large Language Model (LLM)-guided dynamic search-space evolution that enables intelligent topology expansion beyond fixed constraints. Unlike indirect code generation or heuristic rules, SWNAS reasons directly with quantitative structural signals to refine discretization and guide strategic node placement, establishing true large-small model collaboration. Extensive experiments demonstrate SWNAS's effectiveness: it achieves a 2.33% error rate on CIFAR-10 and 23.9% on ImageNet while maintaining computational efficiency. Our modular design enables seamless integration into existing DARTS-family methods, consistently improving performance by 0.16-0.19% across frameworks. Importantly, SWNAS generalizes robustly across different search spaces and maintains stable performance across multiple LLMs, showing that genuine quantitative-qualitative integration can systematically advance neural architecture discovery.
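The abstract's central idea of combining local edge weights with a global node-importance signal can be illustrated with a minimal sketch. The exact formula is not given here, so everything below is an assumption for illustration only: edge weights are taken as a softmax over architecture logits (as in DARTS), node importance is approximated by the total weight flowing into each node, and the "synergistic" score of an edge is its weight scaled by the mean importance of its endpoints.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def synergistic_weights(edge_logits, edges, num_nodes):
    """Hypothetical sketch of 'synergistic weights': blend each edge's
    local softmax weight with a global node-importance signal.
    The specific combination rule here is an assumption, not the
    paper's actual formula."""
    edge_w = softmax(edge_logits)
    # Node importance: total softmax mass flowing into each node.
    node_imp = [0.0] * num_nodes
    for w, (_, dst) in zip(edge_w, edges):
        node_imp[dst] += w
    # Synergistic score: edge weight scaled by the mean importance
    # of its two endpoints, so evaluation is no longer purely local.
    return [w * (node_imp[src] + node_imp[dst]) / 2
            for w, (src, dst) in zip(edge_w, edges)]
```

Under this toy scoring, an edge with a modest local weight can still rank highly if it connects globally important nodes, which is the kind of globally aware discretization the abstract argues for.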
Supplementary Material: pdf
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 10136