Regularized Meta-Learning for Neural Architecture Search

25 Feb 2022 (modified: 05 May 2023) | AutoML 2022 (Late-Breaking Workshop) | Readers: Everyone
Abstract: Neural architecture search (NAS) methods have successfully enabled the automated search of neural architectures in various domains. However, most techniques start the search from scratch for every new task. Techniques that generalize across tasks have been proposed, but they do not always adapt well to new tasks. In this work, we consider meta-learning approaches that effectively leverage prior experience to adapt to unseen tasks. We analyze different regularization and training methods to improve the generalizability of meta-learning for NAS. Empirical results on standard few-shot classification benchmarks show that the added regularization and adjustments to the network optimization improve upon previous approaches, such as MetaNAS.
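
The sketch below is a minimal, hedged illustration of the general idea the abstract describes: combining gradient-based meta-learning with added regularization in the task-adaptation (inner) loop. It is not the authors' exact method; it uses a first-order (Reptile-style) meta-update rather than MetaNAS itself, and the model, task data, and hyperparameters are assumptions chosen for illustration.

```python
# Illustrative sketch only: first-order (Reptile-style) meta-learning with
# weight-decay regularization during inner-loop adaptation. This is NOT the
# paper's exact method; model, task data, and hyperparameters are assumed.
import copy
import torch
import torch.nn as nn

def inner_adapt(meta_model, task_batch, steps=5, inner_lr=0.01, weight_decay=1e-4):
    """Clone the meta-model and take a few regularized SGD steps on one task."""
    model = copy.deepcopy(meta_model)
    # weight_decay adds the L2 penalty that regularizes task adaptation.
    opt = torch.optim.SGD(model.parameters(), lr=inner_lr, weight_decay=weight_decay)
    loss_fn = nn.CrossEntropyLoss()
    x, y = task_batch
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model

def meta_update(meta_model, adapted_model, meta_lr=0.1):
    """Outer step: move meta-weights toward the task-adapted weights."""
    with torch.no_grad():
        for p_meta, p_task in zip(meta_model.parameters(), adapted_model.parameters()):
            p_meta += meta_lr * (p_task - p_meta)

# Toy usage on a hypothetical 5-way few-shot task (random data for illustration).
meta_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 5))
task = (torch.randn(25, 1, 28, 28), torch.randint(0, 5, (25,)))
adapted = inner_adapt(meta_model, task)
meta_update(meta_model, adapted)
```

In MetaNAS-style methods the same inner/outer structure is applied to both network weights and architecture parameters; the regularized inner loop above shows only the weight side of that scheme.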
Keywords: Neural Architecture Search, Meta-Learning, Meta-Reinforcement Learning, Few-Shot Learning
One-sentence Summary: We consider meta-learning approaches that effectively leverage prior experience to adapt to unseen tasks in neural architecture search.
Track: Main track
Reproducibility Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
Reviewers: Rob van Gastel, r.v.gastel@student.tue.nl
CPU Hours: 1160
GPU Hours: 1160
TPU Hours: 0
Evaluation Metrics: Yes
Class Of Approaches: Meta-Learning, Gradient-based Methods
Datasets And Benchmarks: Omniglot, MiniImageNet, TripleMNIST
Performance Metrics: Accuracy
Main Paper And Supplementary Material: pdf
Estimated CO2e Footprint: 125.28