Rethinking Hard Thresholding Pursuit: Full Adaptation and Sharp Estimation

Published: 2025 · Last Modified: 08 Jan 2026 · IEEE Trans. Inf. Theory 2025 · CC BY-SA 4.0
Abstract: Hard Thresholding Pursuit (HTP) has attracted increasing attention for its robust theoretical guarantees and impressive numerical performance in non-convex optimization. This paper considers a high-dimensional linear regression model with $n$ observations, $p$ predictors, and an unknown $s^{*}$-sparse signal $\boldsymbol{\beta}^{*} \in \mathbb{R}^{p}$ corrupted by noise of magnitude $\sigma$. We introduce a novel tuning-free procedure, Full-Adaptive HTP (FAHTP), that simultaneously adapts to both the unknown sparsity and the signal strength of the underlying model. Our theoretical analysis rigorously characterizes the iterative thresholding dynamics of FAHTP, offering refined theoretical insights. Specifically, under the beta-min condition $\min_{\{i: \boldsymbol{\beta}^{*}_{i}\ne 0\}} |\boldsymbol{\beta}^{*}_{i}| \ge C\sigma(\log p/n)^{1/2}$, FAHTP achieves the oracle estimation rate $\sigma(s^{*}/n)^{1/2}$ and recovers the true support set exactly, highlighting its theoretical superiority over convex competitors such as the LASSO and SLOPE. More importantly, even without the beta-min condition, FAHTP achieves a tighter error bound than the classical minimax rate with high probability. Comprehensive numerical experiments substantiate our theoretical findings, underscoring the effectiveness and robustness of the proposed FAHTP.
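To fix ideas, here is a minimal NumPy sketch of the classical HTP iteration (Foucart, 2011) that FAHTP builds on: a gradient step, hard thresholding to the $s$ largest entries, and a least-squares debiasing step on the selected support. The function name `htp`, the unit step size, and the support-stabilization stopping rule are illustrative assumptions; FAHTP's tuning-free adaptation to the unknown sparsity and signal strength is the paper's contribution and is not reproduced here, so the sparsity level $s$ is taken as known.

```python
import numpy as np

def htp(X, y, s, step=1.0, max_iter=100):
    """Classical Hard Thresholding Pursuit for s-sparse recovery from y ~ X @ beta.

    Assumes the sparsity level s is known; FAHTP adapts it automatically.
    """
    n, p = X.shape
    beta = np.zeros(p)
    support = np.array([], dtype=int)
    for _ in range(max_iter):
        # Gradient step on the least-squares loss (1/2)||y - X beta||^2.
        u = beta + step * X.T @ (y - X @ beta)
        # Hard threshold: keep the indices of the s largest entries in magnitude.
        new_support = np.sort(np.argsort(np.abs(u))[-s:])
        if np.array_equal(new_support, support):
            break  # support has stabilized
        support = new_support
        # Debias: least squares restricted to the selected support.
        beta = np.zeros(p)
        beta[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
    return beta, support

# Toy usage under the abstract's model: an s*-sparse signal whose nonzero
# entries satisfy the beta-min condition, Gaussian design, noise level sigma.
rng = np.random.default_rng(0)
n, p, s_star, sigma = 200, 1000, 10, 0.5
X = rng.standard_normal((n, p)) / np.sqrt(n)  # roughly unit-norm columns
beta_true = np.zeros(p)
beta_true[:s_star] = 3.0
y = X @ beta_true + sigma * rng.standard_normal(n)
beta_hat, supp = htp(X, y, s=s_star)
```

The debiasing step is what distinguishes HTP from plain iterative hard thresholding: re-solving least squares on the candidate support removes the shrinkage bias of the gradient step, which is also what makes the oracle rate $\sigma(s^{*}/n)^{1/2}$ attainable once the correct support is identified.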