Abstract: We present Probabilistic Orthogonal Matching Pursuit (PrOMP), a novel probabilistic approach that builds upon orthogonal matching pursuit (OMP) for sparse representations of data. Like OMP, PrOMP is a greedy algorithm for regression that iteratively selects columns of a matrix according to a score. This score is based on a rarely employed feature of the EM algorithm and thus optimizes a marginal probability distribution. While OMP uses correlation as the score, in our probabilistic approach we define the score to be the value of the resulting marginal likelihood; if adding a new signal does not improve this term, the algorithm terminates automatically. Our theoretical analysis also builds on the previous theory for OMP. We demonstrate the algorithm with a focus on a sparse dictionary learning and signal representation task using Bayesian nonparametrics. We first consider the nonparametric Beta Process Factor Analysis (BPFA) model. In addition, we present a new model based on BPFA that we call Beta Process Subspace Analysis (BPSA), which learns a set of subspaces and their respective dimensionalities from data.
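As context for the comparison drawn in the abstract, the sketch below shows the classical OMP loop that PrOMP modifies: a column is selected by its correlation with the current residual, the coefficients on the support are refit by least squares, and iteration stops at a preset sparsity level or residual tolerance. This is a minimal illustrative sketch, not the authors' implementation; the names `D`, `y`, `max_atoms`, and `tol` are assumptions introduced here for illustration.

```python
import numpy as np


def omp(D, y, max_atoms, tol=1e-8):
    """Classical orthogonal matching pursuit for y ~ D @ x with sparse x.

    At each iteration, the column of D most correlated with the current
    residual is added to the support, and the coefficients on the support
    are refit by least squares (the "orthogonal" step).
    """
    n, K = D.shape
    support = []
    coeffs = np.zeros(0)
    residual = y.astype(float).copy()
    for _ in range(max_atoms):
        scores = np.abs(D.T @ residual)      # OMP's correlation-based score
        scores[support] = -np.inf            # exclude already-chosen columns
        support.append(int(np.argmax(scores)))
        # Refit all coefficients on the current support.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
        if np.linalg.norm(residual) < tol:   # hand-set stopping rule
            break
    x = np.zeros(K)
    x[support] = coeffs
    return x, support
```

Per the abstract, PrOMP replaces the correlation score in this loop with the value of a marginal likelihood and terminates automatically once adding a further column no longer improves that value, so a hand-set sparsity level or residual tolerance is not required.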