Abstract: Conformal prediction has received tremendous attention in recent years and has offered new solutions to problems in missing data and causal inference; yet these advances have not leveraged modern semiparametric efficiency theory for more robust and efficient uncertainty quantification. In this paper, we consider the problem of obtaining distribution-free prediction regions accounting for a shift in the distribution of the covariates between the training and test data. Under an explainable covariate shift assumption analogous to the standard missing at random assumption, we propose three variants of a general framework to construct well-calibrated prediction regions for the unobserved outcome in the test sample. Our approach is based on the efficient influence function for the quantile of the unobserved outcome in the test population combined with an arbitrary machine learning prediction algorithm, without compromising asymptotic coverage. We establish that the resulting prediction sets eventually attain nominal coverage in large samples. This guarantee is a consequence of the product bias form of our proposal, which implies correct coverage if either the propensity score or the conditional distribution of the response is estimated sufficiently well. Our results also provide a framework for construction of doubly robust prediction sets of individual treatment effects, under the unconfoundedness condition. We further discuss aggregation of prediction sets from different machine learning algorithms for optimal prediction and illustrate the performance of our methods in both synthetic and real data. Finally, inspired by sensitivity analysis in missing data, we briefly discuss how our proposal could be extended to account for departures from the explainable covariate shift setting.
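The abstract's covariate-shift setting builds on the idea of reweighting calibration scores by a likelihood ratio (or propensity-based estimate thereof) between the test and training covariate distributions. The sketch below illustrates the standard weighted split-conformal quantile construction that underlies this kind of method; the function name, the absolute-residual score, and the weighting inputs are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def weighted_conformal_quantile(scores, weights, test_weight, alpha=0.1):
    """Weighted (1 - alpha) quantile of calibration nonconformity scores.

    scores      : nonconformity scores, e.g. |y_i - mu_hat(x_i)|, on
                  held-out calibration data (illustrative choice of score)
    weights     : estimated likelihood ratios w(x_i) between the test and
                  training covariate densities at the calibration points
    test_weight : the same ratio evaluated at the test point
    """
    scores = np.asarray(scores, dtype=float)
    # The test point contributes a point mass (conceptually at +infinity);
    # normalize all weights jointly so they sum to one.
    w = np.append(np.asarray(weights, dtype=float), float(test_weight))
    p = w / w.sum()
    # Accumulate calibration mass in increasing order of score.
    order = np.argsort(scores)
    cum = np.cumsum(p[:-1][order])
    # Smallest score whose cumulative weighted mass reaches 1 - alpha.
    idx = np.searchsorted(cum, 1 - alpha)
    if idx >= len(scores):
        # Calibration mass never reaches 1 - alpha: trivial (infinite) set.
        return np.inf
    return scores[order][idx]
```

With uniform weights this reduces to ordinary split conformal prediction: for 99 calibration scores and alpha = 0.1 it returns the 90th smallest score, matching the usual ceiling((1 - alpha)(n + 1)) rule. When the test point carries much more weight than the calibration points, the cumulative mass cannot reach 1 - alpha and the prediction set becomes trivial, reflecting that the calibration data are uninformative at that covariate value.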