Abstract: Spectral algorithms leverage spectral regularization techniques to analyze and process data, providing a flexible framework for addressing supervised learning problems. To deepen our understanding of their performance in real-world scenarios where the distributions of training and test data may differ, we conduct a rigorous investigation into the convergence behavior of spectral algorithms under covariate shift. In this setting, the marginal distributions of the input data differ between the training and test datasets, while the conditional distribution of the output given the input remains unchanged. Within a non-parametric regression framework over a reproducing kernel Hilbert space, we analyze the convergence rates of spectral algorithms under covariate shift and show that they achieve minimax optimality when the density ratio between the training and test distributions is uniformly bounded. When the density ratio is unbounded, however, spectral algorithms may become suboptimal. To address this issue, we propose a novel weighted spectral algorithm with normalized weights that incorporates density ratio information into the learning process. Our theoretical analysis shows that this normalized weighted approach achieves optimal capacity-independent convergence rates, although the rates suffer from the saturation phenomenon. Furthermore, by introducing a weight clipping technique, we demonstrate that the weighted spectral algorithm with clipped weights attains convergence rates arbitrarily close to the optimal capacity-dependent ones. This improvement resolves the suboptimality issue in unbounded density ratio scenarios and advances the state of the art by refining existing theoretical results.
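As a concrete illustration (not the paper's implementation), the following minimal Python sketch instantiates the simplest spectral algorithm, kernel ridge regression (Tikhonov regularization), with normalized importance weights and optional weight clipping. All names, the Gaussian kernel choice, and the density-ratio values are illustrative assumptions; in practice the ratio dP_test/dP_train would be estimated from data.

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth=1.0):
    """Gaussian (RBF) Gram matrix between row-wise point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def weighted_krr(X, y, density_ratio, kernel=gaussian_kernel,
                 reg_lambda=1e-3, clip_level=None):
    """Importance-weighted kernel ridge regression under covariate shift.

    density_ratio : estimates of dP_test/dP_train at the training inputs.
    clip_level    : if given, weights are truncated at this threshold
                    (the clipping technique); None means no clipping.
    """
    n = X.shape[0]
    w = np.asarray(density_ratio, dtype=float)
    if clip_level is not None:
        w = np.minimum(w, clip_level)   # clip large importance weights
    w = w * (n / w.sum())               # normalize weights to average 1
    K = kernel(X, X)                    # n x n Gram matrix
    W = np.diag(w)
    # First-order condition of the weighted regularized least-squares
    # objective: (W K + n * lambda * I) alpha = W y.
    alpha = np.linalg.solve(W @ K + n * reg_lambda * np.eye(n), W @ y)
    return lambda X_new: kernel(X_new, X) @ alpha

# Toy usage with a made-up (hypothetical) density-ratio model.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(200)
ratio = np.exp(2.0 * X[:, 0])           # assumed dP_test/dP_train values
f_hat = weighted_krr(X, y, ratio, reg_lambda=1e-3, clip_level=5.0)
print(f_hat(np.array([[0.5]])))         # prediction near sin(1.5)
```

Intuitively, clipping trades a small amount of bias for controlled variance when the density ratio is unbounded, which is consistent with the improved capacity-dependent rates described above.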