Nonconvex Theory of $M$-estimators with Decomposable Regularizers

Published: 01 May 2025, Last Modified: 18 Jun 2025, ICML 2025 poster, CC BY 4.0
Abstract: High-dimensional inference addresses scenarios where the dimension of the data approaches, or even surpasses, the sample size. In these settings, the regularized $M$-estimator is a common technique for inferring parameters. Negahban et al. (2009) establish a unified framework for deriving convergence rates under high-dimensional scaling, demonstrating that estimation errors are confined to a restricted set and revealing fast convergence rates. The key assumption underlying their work is the convexity of the loss function. However, many loss functions in high-dimensional contexts are nonconvex. This raises the question: if the loss function is nonconvex, do the estimation errors still fall within a restricted set? If so, can we recover the convergence rates of the estimation error in nonconvex settings? This paper provides affirmative answers to these critical questions.
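For context, a regularized $M$-estimator in this framework takes the standard form used by Negahban et al. (2009); the notation below ($\mathcal{L}_n$, $\mathcal{R}$, $\lambda_n$, $\theta^*$) is the usual one for that line of work and is not quoted from this paper:
$$\hat{\theta}_{\lambda_n} \in \arg\min_{\theta \in \Omega} \bigl\{ \mathcal{L}_n(\theta) + \lambda_n \mathcal{R}(\theta) \bigr\},$$
where $\mathcal{L}_n$ is the empirical loss (convex in the classical theory, possibly nonconvex here), $\mathcal{R}$ is a decomposable regularizer such as the $\ell_1$ norm, and $\lambda_n > 0$ is the regularization weight. In the convex setting, choosing $\lambda_n \ge 2\,\mathcal{R}^*\!\bigl(\nabla \mathcal{L}_n(\theta^*)\bigr)$, with $\mathcal{R}^*$ the dual norm and $\theta^*$ the true parameter, forces the error $\hat{\theta}_{\lambda_n} - \theta^*$ into a restricted (cone-like) set; this is the property whose nonconvex analogue the abstract addresses.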
Lay Summary: We analyze the convergence properties of nonconvex loss functions in high-dimensional settings. Our findings indicate that, under mild assumptions, the estimation error convergence rates for nonconvex loss functions match those of convex loss functions. This result bridges the gap between theoretical understanding and practical applications of nonconvex optimization methods in high-dimensional statistical estimation.
Primary Area: Theory->Learning Theory
Keywords: $M$-estimators, Nonconvex Theory
Submission Number: 1728