Learning to Recover Sparse Signals

Published: 21 Oct 2019 (last modified: 05 May 2023), NeurIPS 2019 Deep Inverse Workshop Poster
Keywords: Compressed Sensing, Reinforcement Learning, Monte Carlo Tree Search, Basis Pursuit, Orthogonal Matching Pursuit
TL;DR: Formulating sparse signal recovery as a sequential decision-making problem, we develop a method based on RL and MCTS that learns a policy to discover the support of the sparse signal.
Abstract: In compressed sensing, a primary problem is to reconstruct a high-dimensional sparse signal from a small number of observations. In this work, we develop a new sparse signal recovery algorithm using reinforcement learning (RL) and Monte Carlo Tree Search (MCTS). Similar to orthogonal matching pursuit (OMP), our RL+MCTS algorithm chooses the support of the signal sequentially. The key novelty is that the proposed algorithm learns how to choose the next support element, as opposed to following a pre-designed rule as in OMP. Empirical results demonstrate the superior performance of the proposed RL+MCTS algorithm over existing sparse signal recovery algorithms.
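To ground the sequential-support-selection viewpoint the abstract describes, here is a minimal sketch of the classical OMP baseline: at each step it picks the dictionary column most correlated with the current residual, then re-fits by least squares on the chosen support. The proposed RL+MCTS method replaces this fixed correlation rule with a learned policy; the function name `omp` and the problem sizes below are illustrative, not from the paper.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily select a support of size k.

    A: (m, n) measurement matrix, y: (m,) observations, k: target sparsity.
    Returns the estimated n-dimensional sparse signal and its support.
    """
    residual = y.copy()
    support = []
    for _ in range(k):
        # Fixed selection rule: column most correlated with the residual.
        # (This is the step the RL+MCTS policy learns instead.)
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Least-squares refit on the chosen support, then update the residual.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = x_s
    return x_hat, sorted(support)

# Illustrative noiseless recovery: a 3-sparse signal in 50 dimensions
# observed through 30 random Gaussian measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
x_true = np.zeros(50)
x_true[[5, 17, 33]] = [2.0, -1.5, 3.0]
y = A @ x_true
x_hat, support = omp(A, y, 3)
```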
