Abstract: The problem of computing sparsest solutions of underdetermined linear systems $Ax=b$ is NP-hard in general. Subclasses with extra properties may admit efficient algorithms; most notably, problems satisfying the restricted isometry property (RIP) can be solved by convex $\ell_1$-minimization. While these classes have been very successful, they leave out many practical applications. Alternative sub-classes can be based on the prior information that $x=Xz$ lies in the (sparse) span of some suitable matrix $X$. This prior knowledge allows us to relax the assumptions on $A$ from RIP to stable rank and, by choosing $X$, to make the classes flexible.
However, in order to utilize these classes in a solver, we need explicit knowledge of $X$, which, in this paper, we learn from related samples $A$ and $b_l$, $l=1,\dots$. During training, we do not know $X$ yet and need other mechanisms to circumvent the hardness of the problem. We do so by organizing the samples in a hierarchical curriculum tree that progresses from easy to harder problems.
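To illustrate why explicit knowledge of $X$ makes the recovery step tractable, the following sketch recovers a sparse latent code $z$ with $x = Xz$ by $\ell_1$-minimization cast as a linear program. This is only a hedged illustration with hypothetical dimensions and random stand-in matrices for $A$ and the learned $X$; it is not the paper's training or curriculum procedure.

```python
# Illustrative sketch (not the paper's algorithm): given a (learned) matrix X,
# recover x = X z from b = A x by minimizing ||z||_1, cast as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 20, 100, 40                      # hypothetical sizes: measurements, ambient dim, latent dim
A = rng.standard_normal((m, n))            # stand-in measurement matrix
X = rng.standard_normal((n, k))            # stand-in for a learned span/dictionary
z_true = np.zeros(k)
z_true[rng.choice(k, 3, replace=False)] = rng.standard_normal(3)  # 3-sparse latent code
b = A @ X @ z_true

# min ||z||_1  s.t.  (A X) z = b   <=>   min sum(t)  s.t.  -t <= z <= t,  (A X) z = b
M = A @ X
c = np.concatenate([np.zeros(k), np.ones(k)])       # objective: sum of slack variables t
A_ub = np.block([[ np.eye(k), -np.eye(k)],          #  z - t <= 0
                 [-np.eye(k), -np.eye(k)]])         # -z - t <= 0
b_ub = np.zeros(2 * k)
A_eq = np.hstack([M, np.zeros((m, k))])             # (A X) z = b
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * k + [(0, None)] * k)
z_hat = res.x[:k]
print("recovery error:", np.linalg.norm(z_hat - z_true))
```

With random Gaussian stand-ins and a sufficiently sparse $z$, the linear program typically recovers $z$ exactly; the point of the sketch is only that, once $X$ is known, recovery reduces to a convex problem in the latent coefficients.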
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: See comments to reviewers.
Assigned Action Editor: ~Jonathan_Scarlett1
Submission Number: 1247