Abstract: In high-dimensional sparse regression, the \textsc{Lasso} estimator offers excellent theoretical guarantees but is well known to produce biased estimates. To address this, \cite{Javanmard2014} introduced a method to ``debias'' the \textsc{Lasso} estimates for a random sub-Gaussian sensing matrix $\boldsymbol{A}$. Their approach relies on computing an ``approximate inverse'' $\boldsymbol{M}$ of the matrix $\boldsymbol{A}^\top \boldsymbol{A}/n$ by solving a convex optimization problem. This matrix $\boldsymbol{M}$ plays a critical role in mitigating bias and in constructing confidence intervals from the debiased \textsc{Lasso} estimates. However, computing $\boldsymbol{M}$ is expensive in practice, as it requires iterative optimization.
In this work, we re-parameterize the optimization problem to compute a ``debiasing matrix'' $\boldsymbol{W} := \boldsymbol{A}\boldsymbol{M}^{\top}$ directly, rather than the approximate inverse $\boldsymbol{M}$. This reformulation retains the theoretical guarantees of the debiased \textsc{Lasso} estimates, since those guarantees depend on $\boldsymbol{M}$ only through the \emph{product} $\boldsymbol{A}\boldsymbol{M}^{\top}$. Notably, under a specific deterministic condition on the sensing matrix $\boldsymbol{A}$, we derive a simple and computationally efficient closed-form expression for $\boldsymbol{W}$ within the original debiasing framework.
This condition is satisfied with high probability for a wide class of randomly generated sensing matrices.
Moreover, the reformulated optimization problem based on $\boldsymbol{W}$ is guaranteed to have a unique optimal solution, unlike the original formulation based on $\boldsymbol{M}$. We verify our main result with numerical simulations.
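For context, a minimal sketch of why the guarantees involve $\boldsymbol{M}$ only through this product (writing $\boldsymbol{y}$ for the measurement vector and $\hat{\boldsymbol{\theta}}$ for the \textsc{Lasso} estimate, notation not defined in this abstract): the debiased estimator of \cite{Javanmard2014} takes the form
$$\hat{\boldsymbol{\theta}}^{d} = \hat{\boldsymbol{\theta}} + \frac{1}{n}\,\boldsymbol{M}\boldsymbol{A}^{\top}\big(\boldsymbol{y} - \boldsymbol{A}\hat{\boldsymbol{\theta}}\big) = \hat{\boldsymbol{\theta}} + \frac{1}{n}\,\boldsymbol{W}^{\top}\big(\boldsymbol{y} - \boldsymbol{A}\hat{\boldsymbol{\theta}}\big),$$
since $\boldsymbol{M}\boldsymbol{A}^{\top} = (\boldsymbol{A}\boldsymbol{M}^{\top})^{\top} = \boldsymbol{W}^{\top}$, consistent with the claim above that the guarantees depend on the product $\boldsymbol{A}\boldsymbol{M}^{\top}$ rather than on $\boldsymbol{M}$ alone.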
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: In response to the reviewers' remarks, we have made the following additions to the manuscript, all of which are highlighted in blue font in the uploaded PDF:
Reviewer fghi
1) Some clarifications regarding the choice of $\mu$ in the algorithm (Remark 3 after Theorem 2)
2) Experiments on compressive reconstruction of large images and videos, to address the concern regarding scalability (see Section 6). Compared to the state of the art, these results are unique in that they provide an estimate of the uncertainty in every pixel of the reconstructed images.
3) Some remarks regarding comparison with other debiasing methods (Remark 1 after Theorem 1)
Reviewer 1iZg
1) A literature review of recent papers that address some form of LASSO debiasing (Section 4)
2) To address the concern that there were no results on real data, we have now conducted experiments on compressive image reconstruction for three popular hardware architectures: the Rice single-pixel camera, coded snapshot video acquisition, and coded snapshot hyperspectral image acquisition (see Section 6). These results show that the approach scales easily to large, practical datasets. Compared to the state of the art, these results are unique in that they provide an estimate of the uncertainty in every pixel of the reconstructed images.
3) Some clarifications regarding the choice of $\mu$ in the optimization algorithm (Remark 3 after Theorem 2)
Reviewer sNBF
1) A sketch of the proof of Theorem 1 has been added to the main body; the detailed proof remains in the appendix.
2) Some clarifications regarding the theorems (see Remark 3 after Theorem 2)
3) We have added a set of experiments comparing the proposed method and the baseline in terms of both accuracy and runtime under different values of $\mu$ (see the last paragraph of Section 5.3)
Assigned Action Editor: ~Jie_Shen6
Submission Number: 6037