A Simple and Fast Baseline for Tuning Large XGBoost Models

Anonymous

30 Sept 2021 (modified: 12 Mar 2024) · NeurIPS 2021 Workshop MetaLearn Blind Submission
Keywords: xgboost, hyperband, hyperparameter optimization
Abstract: XGBoost, a scalable tree boosting algorithm, has proven effective for many prediction tasks of practical interest, especially on tabular datasets. Hyperparameter tuning can further improve predictive performance, but training many models on large datasets can be time-consuming. Owing to the discovery that (i) there is a strong linear relation between dataset size and training time, (ii) XGBoost models satisfy the *ranking hypothesis*, and (iii) lower-fidelity models can discover promising hyperparameter configurations, we show that uniform subsampling makes for a simple yet fast baseline to speed up the tuning of large XGBoost models using multi-fidelity hyperparameter optimization with data subsets as the fidelity dimension. We demonstrate the effectiveness of this baseline on large-scale tabular datasets ranging from $15$ to $70\,\mathrm{GB}$ in size.
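
A minimal sketch of the core idea, not the authors' implementation: uniform row subsampling serves as the fidelity dimension in a successive-halving-style search over XGBoost hyperparameters, so that cheap low-fidelity evaluations prune most configurations before any full-data training. The synthetic dataset, search space, budget schedule, and helper names below are illustrative assumptions.

```python
import numpy as np
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a large tabular dataset.
X, y = make_classification(n_samples=50_000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

rng = np.random.default_rng(0)

def sample_config():
    # Hypothetical search space; the paper's exact space may differ.
    return {
        "max_depth": int(rng.integers(3, 11)),
        "eta": float(10 ** rng.uniform(-3, 0)),
        "subsample": float(rng.uniform(0.5, 1.0)),
        "objective": "binary:logistic",
        "eval_metric": "logloss",
    }

def evaluate(config, fraction):
    # Fidelity = uniform subsample of the training rows.
    n = int(fraction * len(X_train))
    idx = rng.choice(len(X_train), size=n, replace=False)
    dtrain = xgb.DMatrix(X_train[idx], label=y_train[idx])
    dval = xgb.DMatrix(X_val, label=y_val)
    booster = xgb.train(config, dtrain, num_boost_round=100)
    return log_loss(y_val, booster.predict(dval))

# Successive halving over data-subset fidelities 1/16 -> 1/4 -> 1,
# keeping the top quarter of configurations after each rung.
configs = [sample_config() for _ in range(16)]
for fraction in (1 / 16, 1 / 4, 1.0):
    scores = sorted(((evaluate(c, fraction), c) for c in configs), key=lambda t: t[0])
    configs = [c for _, c in scores[: max(1, len(scores) // 4)]]

print("best config:", configs[0])
```

Because training time scales roughly linearly with dataset size and low-fidelity rankings correlate with full-data rankings (the ranking hypothesis), most of the search budget in a schedule like this is spent on small, cheap subsets.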
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2111.06924/code)