Sparse Kernel Regression with Coefficient-based l_q-regularization

09 Sept 2020, OpenReview Archive Direct Upload
Abstract: In this paper, we consider l_q-regularized kernel regression with 0 < q <= 1. In form, the algorithm minimizes a least-squares loss functional plus a coefficient-based l_q penalty term over the linear span of features generated by a kernel function. We study the asymptotic behavior of the algorithm within the framework of learning theory. The contribution of this paper is two-fold. First, we derive a tight bound on the l_2 empirical covering numbers of the function space involved in the error analysis. Based on this result, we obtain convergence rates for l_1-regularized kernel regression that are the best known so far. Second, for the case 0 < q < 1, we show that the regularization parameter controls a trade-off between sparsity and convergence rates. Under some mild conditions, the fraction of non-zero coefficients in a local minimizer of the algorithm tends to 0 at a polynomial decay rate as the sample size m grows. Since the underlying optimization problem is non-convex, we also discuss how to generate a minimizing sequence iteratively, which allows us to search for a local minimizer around any initial point.
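
The abstract only sketches the objective and the iterative scheme. Below is a minimal NumPy sketch of one standard way to generate such a minimizing sequence: an iteratively reweighted least-squares (majorize-minimize) loop that replaces each non-convex |c_j|^q term with a quadratic upper bound at the current iterate, so every step reduces a smoothed objective and the iterates settle around a local minimizer. This is a generic illustration, not the paper's exact algorithm; the Gaussian kernel, the ridge initialization, and all parameter values are assumptions chosen for the example.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # Pairwise Gaussian kernel K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2)).
    # (Illustrative choice; the paper allows a general kernel.)
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lq_kernel_regression(X, y, q=0.5, lam=1e-2, sigma=1.0,
                         n_iter=100, eps=1e-8, tol=1e-10):
    """Iteratively reweighted least squares for the coefficient-based problem
        min_c  (1/m) ||K c - y||^2 + lam * sum_j |c_j|^q,   0 < q <= 1,
    where f(x) = sum_j c_j K(x, x_j). Each step majorizes |c|^q by a weighted
    quadratic at the previous iterate and solves the resulting ridge system.
    """
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    # Ridge-regression warm start (an assumed, not prescribed, initial point).
    c = np.linalg.solve(K @ K / m + lam * np.eye(m), K @ y / m)
    for _ in range(n_iter):
        # Majorizer weights: |c|^q <= const + (q/2)(c_prev^2 + eps)^(q/2-1) c^2.
        w = 0.5 * q * (c ** 2 + eps) ** (q / 2.0 - 1.0)
        c_new = np.linalg.solve(K @ K / m + lam * np.diag(w), K @ y / m)
        if np.max(np.abs(c_new - c)) < tol:
            c = c_new
            break
        c = c_new
    return K, c

# Toy usage: fit a noisy sine; thresholding the small coefficients reads off
# the fraction of non-zero coefficients that the sparsity analysis concerns.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(80)
K, c = lq_kernel_regression(X, y, q=0.5, lam=1e-3)
print(f"fraction of non-zero coefficients: {np.mean(np.abs(c) > 1e-6):.2f}")
```

Each iteration only solves an m x m linear system, and growing lam drives more coefficients toward zero, mirroring the sparsity/convergence trade-off described above.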