DOI: 10.64028/ldgc381872
Keywords: Bayesian regularization, Bayesian stacking, predictive performance
TL;DR: This work examines whether Bayesian regularization of member models improves the predictive performance of Bayesian stacking.
Abstract: Bayesian stacking is a procedure adapted from machine learning that allows researchers to combine multiple distinct models and optimize overall predictions, with the added benefit of not relying on the strong assumptions required by Bayesian model averaging (BMA). For individual models, Bayesian regularization via sparsity-inducing priors yields stronger predictive accuracy than unregularized modeling approaches. Although model stacking is not intended to serve as a variable selection method, we are unaware of any systematic investigation of whether sparsity-inducing priors applied to the member models in a stack could lead to more accurate predictions. The present work investigates whether applying Bayesian regularization via sparsity-inducing priors to individual member models is a worthwhile practice when using Bayesian stacking procedures. Contrary to our expectations, we find that inducing sparsity in stacking member models does not improve predictive performance. Other results and limitations of this work are also discussed.
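As a rough illustration of the stacking step described in the abstract (not the authors' implementation), the following is a minimal Python sketch of estimating stacking weights by maximizing the summed log score of a weighted mixture of member-model predictive densities. It assumes pointwise log predictive densities (e.g., from leave-one-out cross-validation) are already available; the function name `stacking_weights` and the synthetic `lpd` data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def stacking_weights(lpd):
    """Estimate Bayesian stacking weights.

    lpd: (n_points, n_models) array of pointwise log predictive
    densities for each member model (e.g., from LOO-CV).
    Returns simplex weights maximizing the summed log score of the
    weighted mixture of the member models' predictive densities.
    """
    n_points, n_models = lpd.shape

    def neg_log_score(z):
        # Softmax parameterization keeps the weights on the simplex.
        w = np.exp(z - z.max())
        w /= w.sum()
        # Pointwise log density of the mixture, computed stably:
        # log(sum_k w_k * p_k(y_i)) = logsumexp(lpd_i + log w).
        mix_lpd = logsumexp(lpd + np.log(w), axis=1)
        return -mix_lpd.sum()

    res = minimize(neg_log_score, np.zeros(n_models), method="BFGS")
    w = np.exp(res.x - res.x.max())
    return w / w.sum()

# Hypothetical example: three member models scored on 100 held-out points.
rng = np.random.default_rng(0)
lpd = rng.normal(loc=-1.0, scale=0.3, size=(100, 3))
print(stacking_weights(lpd))  # weights are nonnegative and sum to 1
```

In practice, one would typically obtain `lpd` from a LOO-CV routine and could rely on existing library support, such as ArviZ's `az.compare(..., method="stacking")`, rather than hand-rolling the optimization.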
Supplementary Material: zip
Submission Number: 4