Abstract: Sparse Bayesian Learning (SBL) is a powerful framework for attaining sparsity in probabilistic models. Herein, we propose a coordinate ascent algorithm for SBL termed Relevance Matching Pursuit (RMP) and show that, as its noise variance parameter goes to zero, RMP exhibits a surprising connection to Stepwise Regression. Further, we derive novel guarantees for Stepwise Regression algorithms, which also shed light on RMP. Our guarantees for Forward Regression improve on deterministic and probabilistic results for Orthogonal Matching Pursuit with noise. Our analysis of Backward Regression on determined systems culminates in a bound on the residual of the optimal solution to the subset selection problem; when this bound is satisfied, Backward Regression is guaranteed to recover the optimal solution. To our knowledge, this bound is the first that can be computed in polynomial time, and it depends chiefly on the smallest singular value of the matrix. We report numerical experiments using a variety of feature selection algorithms. Notably, RMP and its limiting variant are both efficient and maintain strong performance with correlated features.
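For context on the two Stepwise Regression variants the abstract analyzes, the following is a minimal sketch, not the authors' implementation; the function names and the synthetic example are illustrative. Forward Regression adds, at each step, the feature whose inclusion most reduces the least-squares residual (in contrast to OMP, which selects by correlation with the current residual), while Backward Regression starts from the full model and prunes the feature whose removal increases the residual least.

```python
import numpy as np

def ls_residual(X, y, S):
    """L2 norm of the residual when y is regressed on columns S of X."""
    coef, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
    return np.linalg.norm(y - X[:, S] @ coef)

def forward_regression(X, y, k):
    """Greedily add the column whose inclusion most reduces the residual."""
    support = []
    for _ in range(k):
        candidates = [j for j in range(X.shape[1]) if j not in support]
        support.append(min(candidates,
                           key=lambda j: ls_residual(X, y, support + [j])))
    return support

def backward_regression(X, y, k):
    """Start from all columns; greedily drop the one whose removal
    increases the residual the least, until k columns remain."""
    support = list(range(X.shape[1]))
    while len(support) > k:
        support.remove(min(support,
                           key=lambda j: ls_residual(
                               X, y, [i for i in support if i != j])))
    return support

# Tiny usage example on synthetic data (hypothetical, for illustration):
# both variants should recover the planted support {2, 7}.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
y = X[:, [2, 7]] @ np.array([1.0, -2.0]) + 0.01 * rng.standard_normal(50)
print(forward_regression(X, y, 2), backward_regression(X, y, 2))
```

Note the design difference that drives the abstract's guarantees: the forward pass commits to features it can never revisit, whereas the backward pass on a determined system sees the full model before pruning, which is what makes an optimality certificate in terms of the residual possible.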