Improving Neural Architecture Search by Minimizing Worst-Case Validation Loss

TMLR Paper 2634 Authors

06 May 2024 (modified: 19 May 2024) · Under review for TMLR · CC BY-SA 4.0
Abstract: Neural architecture search (NAS) aims to automatically discover high-performance architectures and has made considerable progress. Existing NAS methods learn architectures by minimizing an average-case validation loss; as a result, the searched architectures are less capable of making correct predictions in worst-case scenarios. To address this problem, we propose a framework that leverages a deep generative model to generate adversarial validation examples, which measure the worst-case validation performance of an architecture, and improves the architecture by minimizing the loss on the generated adversarial validation data. Our framework is based on multi-level optimization, which performs multiple learning stages end-to-end. Experiments on a variety of datasets demonstrate the effectiveness of our method.
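For concreteness, the search procedure described in the abstract can be sketched as an alternating, first-order approximation of the multi-level optimization. The sketch below is illustrative only: it assumes a DARTS-style differentiable search space, an additive perturbation generator, and PyTorch-style APIs; the names (search_step, generator, the three-stage update order) are assumptions, not the authors' actual implementation.

    import torch
    import torch.nn.functional as F

    def search_step(model, generator, w_opt, a_opt, g_opt,
                    train_batch, val_batch):
        # model holds both network weights (optimized by w_opt) and
        # architecture parameters (optimized by a_opt), as in DARTS.
        x_tr, y_tr = train_batch
        x_val, y_val = val_batch

        # Stage 1: update network weights on the training loss.
        w_opt.zero_grad()
        F.cross_entropy(model(x_tr), y_tr).backward()
        w_opt.step()

        # Stage 2: update the generator to *maximize* the validation
        # loss, i.e., to produce adversarial validation examples.
        g_opt.zero_grad()
        x_adv = x_val + generator(x_val)  # additive perturbation (assumed form)
        (-F.cross_entropy(model(x_adv), y_val)).backward()
        g_opt.step()

        # Stage 3: update architecture parameters to minimize the loss
        # on the generated adversarial validation data, i.e., the
        # worst-case validation loss.
        a_opt.zero_grad()
        x_adv = x_val + generator(x_val)
        F.cross_entropy(model(x_adv), y_val).backward()
        a_opt.step()

Alternating first-order updates like these are a common approximation of nested optimization; the paper's end-to-end multi-level formulation would instead propagate hypergradients through the stages.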
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=wocL64DIFf
Changes Since Last Submission: Corrected the formatting issues.
Assigned Action Editor: ~Yaoliang_Yu1
Submission Number: 2634