Abstract: Gradient learning (GL), which seeks to approximate the gradient of a target function, has drawn significant attention in recent years. Despite rapid progress, existing GL methods are almost all built on the strict assumption that the samples are drawn independently and identically distributed (i.i.d.). In this paper, we go beyond the classical i.i.d. framework and investigate GL under the independent covariate shift (i.c.s.) assumption. Specifically, we establish an upper bound on the generalization error from the viewpoint of function approximation and show its theoretical consistency under a mild regularity condition of a bounded density ratio, which generalizes the classical GL results obtained under the i.i.d. framework. In addition, we identify a real-world example that satisfies the i.c.s. assumption. Numerical studies on synthetic and real-world examples validate the effectiveness of the proposed approach in the i.c.s. setting.
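
For concreteness, a minimal sketch of the quantities involved, assuming the classical least-squares GL formulation (the paper's exact estimator and weights may differ): given a sample $\mathbf{z} = \{(x_i, y_i)\}_{i=1}^{m}$, classical GL estimates the gradient $\nabla f^{*}$ of the target function $f^{*}$ by minimizing

\[
\mathcal{E}_{\mathbf{z}}(\vec{f}) \;=\; \frac{1}{m^{2}} \sum_{i,j=1}^{m} \omega_{ij}\, \bigl( y_i - y_j + \vec{f}(x_i)^{\top} (x_j - x_i) \bigr)^{2} \;+\; \lambda\, \|\vec{f}\|_{K}^{2},
\]

where $\omega_{ij}$ is a localizing weight concentrating on nearby pairs and $\lambda \|\vec{f}\|_{K}^{2}$ is a kernel-norm regularizer. Under covariate shift, the training and test marginal distributions $p_{\mathrm{tr}}$ and $p_{\mathrm{te}}$ differ, and the bounded density-ratio condition mentioned above reads $\sup_{x}\, p_{\mathrm{te}}(x)/p_{\mathrm{tr}}(x) < \infty$.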