PASHA: Efficient HPO with Progressive Resource Allocation

25 Feb 2022 (modified: 22 Oct 2023) · AutoML 2022 (Late-Breaking Workshop) · Readers: Everyone
Abstract: Hyperparameter optimization (HPO) and neural architecture search (NAS) are the methods of choice for obtaining best-in-class machine learning models, but in practice they can be costly to run. When models are trained on large datasets, tuning them with HPO or NAS rapidly becomes prohibitively expensive for practitioners, even when efficient multi-fidelity methods are employed. We propose an approach to tackle the challenge of tuning machine learning models trained on large datasets with limited computational resources. Our approach, named PASHA, dynamically increases the maximum resources allocated to the tuning procedure only as needed. The experimental comparison shows that PASHA identifies well-performing hyperparameter configurations and architectures while consuming significantly fewer computational resources than solutions like ASHA.
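To make the progressive-resource-allocation idea concrete, here is a minimal, self-contained Python sketch: a successive-halving style search that starts with a small resource cap and grows it only while the ranking of the top configurations has not yet stabilized. This is a toy, synchronous illustration under our own assumptions, not the authors' PASHA implementation; all function and parameter names (`evaluate`, `successive_halving`, `progressive_search`, `eta`, `min_r`, `max_r`) are hypothetical.

```python
import random


def evaluate(config, resource):
    # Toy stand-in for "train for `resource` epochs and return a validation
    # score": a quadratic in the hyperparameter plus noise that shrinks as
    # more resource is spent (higher is better).
    noise = random.gauss(0.0, 0.5 / resource)
    return -(config["x"] - 0.3) ** 2 + noise


def successive_halving(configs, min_r, current_max_r, eta=3):
    # Run one bracket of successive halving up to the current resource cap.
    # Returns one (resource, {config_index: score}) entry per rung so the
    # caller can compare rankings between the two highest rungs.
    rungs = []
    survivors = list(range(len(configs)))
    r = min_r
    while r <= current_max_r and survivors:
        scores = {i: evaluate(configs[i], r) for i in survivors}
        rungs.append((r, scores))
        ranked = sorted(survivors, key=lambda i: scores[i], reverse=True)
        survivors = ranked[: max(1, len(ranked) // eta)]  # keep top 1/eta
        r *= eta
    return rungs


def progressive_search(num_configs=27, min_r=1, max_r=81, eta=3, seed=0):
    random.seed(seed)
    configs = [{"x": random.uniform(-1.0, 1.0)} for _ in range(num_configs)]
    current_max_r = min_r * eta  # start with a deliberately small cap

    while True:
        rungs = successive_halving(configs, min_r, current_max_r, eta)
        _, top_scores = rungs[-1]
        _, prev_scores = rungs[-2]
        # Rank the top-rung survivors by their top-rung scores and by their
        # scores one rung below; agreement suggests the current cap already
        # suffices to identify the best configuration.
        top_rank = sorted(top_scores, key=top_scores.get, reverse=True)
        prev_rank = sorted(top_scores, key=prev_scores.get, reverse=True)
        if top_rank == prev_rank or current_max_r >= max_r:
            break
        current_max_r = min(current_max_r * eta, max_r)  # grow the cap

    best = max(top_scores, key=top_scores.get)
    return configs[best], current_max_r


if __name__ == "__main__":
    best_config, used_max_r = progressive_search()
    print(f"best config: {best_config}, resource cap used: {used_max_r}")
```

Note that the actual method described in the paper builds on asynchronous successive halving (ASHA) and does not re-run brackets from scratch; the sketch above only illustrates the principle of raising the resource cap when the top-rung ranking is not yet stable.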
Keywords: Neural architecture search, Hyperparameter optimization, Large datasets, Computational efficiency, Cost
One-sentence Summary: Efficient multi-fidelity method with progressive resource allocation for HPO and NAS.
Track: Main track
Reproducibility Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
Reviewers: Ondrej Bohdal, ondrej.bohdal@ed.ac.uk
CPU Hours: 0
GPU Hours: 0
TPU Hours: 0
Evaluation Metrics: Yes
Datasets And Benchmarks: NAS-Bench-201
Performance Metrics: Accuracy, F1
Main Paper And Supplementary Material: pdf
Code And Dataset Supplement: zip
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/arxiv:2207.06940/code)