LoRA-DARTS: Low Rank Adaptation for Differentiable Architecture Search

Published: 12 Jul 2024, Last Modified: 14 Aug 2024
Venue: AutoML 2024 Workshop
License: CC BY 4.0
Keywords: Neural Architecture Search, Differentiable Architecture Search, NAS, DARTS
TL;DR: We apply low-rank weight updates to the candidate operations of a DARTS supernet to avoid the failure mode of overfitting the training dataset.
Abstract: Gradient-based one-shot neural architecture search (NAS) methods, such as Differentiable Architecture Search (DARTS), have emerged as computationally feasible techniques for exploring large search spaces. However, DARTS still suffers from failure modes, such as selecting architectures that favor skip connections over learnable operations. In this work, we propose applying a low-rank adaptation (LoRA) to the weights of the candidate operations to address this failure mode, without introducing new regularization terms or significant changes to the DARTS search procedure. The code for our work is available at https://github.com/automl/LoRA-DARTS.
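The authors' implementation is in the linked repository; the PyTorch sketch below is only a minimal illustration of the core idea, assuming the standard LoRA parameterization applied to a convolutional candidate operation: the full-rank base weight W is frozen and a trainable low-rank update BA is added to it. Names such as LoRAConv, rank, and alpha are assumptions made for this sketch, not identifiers from the authors' code.

```python
# Minimal sketch (not the authors' implementation): a candidate convolution
# whose frozen base weight W is adapted by a trainable low-rank update BA.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRAConv(nn.Module):
    """Convolution with a frozen base weight and a trainable low-rank update."""

    def __init__(self, in_ch, out_ch, kernel_size=3, rank=4, alpha=1.0):
        super().__init__()
        self.base = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.base.weight.requires_grad = False  # freeze the full-rank weight

        # Low-rank factors: effective weight is W + (alpha / rank) * B @ A,
        # reshaped to the shape of the conv weight tensor.
        fan_in = in_ch * kernel_size * kernel_size
        self.A = nn.Parameter(torch.randn(rank, fan_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_ch, rank))  # zero init: update starts at 0
        self.scale = alpha / rank
        self.kernel_size = kernel_size

    def forward(self, x):
        delta = (self.B @ self.A).view_as(self.base.weight)  # low-rank weight update
        weight = self.base.weight + self.scale * delta
        return F.conv2d(x, weight, padding=self.kernel_size // 2)
```

In a DARTS supernet, an operation of this form would stand in for the learnable convolutional candidates inside each mixed operation, while the architecture parameters are optimized by the usual DARTS procedure.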
Submission Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
Submission Number: 27