Abstract: In this paper, we investigate different approximate message passing (AMP) algorithms for recovering sparse signals measured in a compressed unlimited sampling (US) framework. More specifically, in addition to our previous work on the generalized approximate message passing (GAMP) algorithm, we use an alternative formulation of the US recovery problem to consider the Bayesian approximate message passing (BAMP) algorithm. Furthermore, we consider learned versions of the two algorithms that model the source prior with a Gaussian mixture (GM), which can closely approximate continuous, discrete, and mixture distributions. We thus propose the learned Gaussian mixture GAMP (L-GM-GAMP) and the learned Gaussian mixture AMP (L-GM-AMP) algorithms for the US recovery problem: two plug-and-play algorithms that learn the source distribution and the algorithms' tunable parameters in a supervised manner. To empirically demonstrate the effectiveness of these algorithms, we conduct Monte Carlo (MC) simulations. The results show that the computationally more stable learned AMP (LAMP) requires slightly more measurements to reach the same accuracy as the GAMP algorithm. Additionally, we observe that within the US framework, the algorithms using the learning approach, namely L-GM-AMP and L-GM-GAMP, achieve the same accuracy while reducing the amount of required prior knowledge, at the expense of prior algorithm training.