Improving Neural Architecture Search by Minimizing Worst-Case Validation Loss

TMLR Paper 2634 Authors

06 May 2024 (modified: 06 Oct 2024) · Rejected by TMLR · License: CC BY 4.0
Abstract: Neural architecture search (NAS) aims to automatically discover high-performance architectures and has made considerable progress. Existing NAS methods learn architectures by minimizing average-case validation losses; as a result, the searched architectures are less capable of making correct predictions in worst-case scenarios. To address this problem, we propose a framework that leverages a deep generative model to generate adversarial validation examples, which measure the worst-case validation performance of an architecture, and then improves the architecture by minimizing the loss on this generated adversarial validation data. Our framework is based on multi-level optimization, which performs multiple learning stages end-to-end. Experiments on a variety of datasets demonstrate the effectiveness of our method.
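
As a rough illustration of the kind of multi-level optimization the abstract describes, the stages might be written as below. This is a minimal sketch; the notation (training loss L_tr, validation loss L_val, model weights W, architecture A, and generator G) is an assumption for illustration, not the paper's actual formulation.

\begin{align*}
% Stage 1 (assumed): learn model weights on training data, given architecture A
W^*(A) &= \arg\min_{W} \; \mathcal{L}_{tr}\big(W, A;\, D_{tr}\big) \\
% Stage 2 (assumed): learn a generator whose outputs are adversarial validation
% examples, i.e., examples maximizing the validation loss of the trained model
G^*(A) &= \arg\max_{G} \; \mathcal{L}_{val}\big(W^*(A), A;\, G(D_{val})\big) \\
% Stage 3 (assumed): update the architecture to minimize the loss on the
% generated adversarial validation data, i.e., the worst-case validation loss
A^* &= \arg\min_{A} \; \mathcal{L}_{val}\big(W^*(A), A;\, G^*(A)(D_{val})\big)
\end{align*}

Under this reading, the outer objective in the last stage is a worst-case validation loss, since the generator in the middle stage is chosen adversarially. In practice, nested problems of this form are typically solved end-to-end by alternating gradient updates with approximate hypergradients, consistent with the abstract's claim that the multiple learning stages are performed end-to-end.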
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:

Corrected the formatting issues.

Assigned Action Editor: Yaoliang Yu
Submission Number: 2634