Keywords: Counterfactual prediction, conditional average treatment effects, measurement error, label noise
TL;DR: We develop counterfactual prediction methods designed for settings with measurement error in the target outcome.
Abstract: Growing work in algorithmic decision support proposes methods for combining predictive models with human judgment to improve decision quality. A challenge that arises in this setting is predicting the risk of a decision-relevant target outcome under multiple candidate actions. While counterfactual prediction techniques have been developed for these tasks, current approaches do not account for measurement error in observed labels. This is a key limitation because in many domains, observed labels (e.g., medical diagnoses, test scores) serve only as a proxy for the target outcome of interest (e.g., underlying biological outcomes, student learning). We develop a method for counterfactual prediction of target outcomes observed under treatment-conditional outcome measurement error (TC-OME). Our method minimizes risk with respect to target potential outcomes given access to observational data and estimates of measurement error parameters. We also develop a method for estimating the error parameters in cases where they are unknown in advance. Through a synthetic evaluation, we show that our approach achieves performance parity with an oracle model when measurement error parameters are known, and retains performance under moderate bias in the error parameter estimates.
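The abstract does not spell out the risk-minimization correction, but a standard building block for this kind of setting is an unbiased loss correction for class-conditional label noise (in the style of Natarajan et al.), applied separately within each treatment arm since the flip rates are treatment-conditional under TC-OME. The sketch below is an illustrative assumption, not the paper's actual estimator; it assumes a binary outcome, and the names `rho0`, `rho1`, and `corrected_loss` are hypothetical.

```python
import numpy as np

def log_loss(p, y):
    """Standard binary cross-entropy for predicted probability p and label y."""
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def corrected_loss(p, y_noisy, rho0, rho1):
    """Noise-corrected surrogate loss for class-conditional label noise.

    rho0 = P(observed label = 1 | true label = 0)
    rho1 = P(observed label = 0 | true label = 1)

    Under TC-OME these flip rates would be indexed by the treatment arm;
    this helper corrects one arm at a time using that arm's rates. The
    expectation of this loss over the noisy label equals the loss on the
    (unobserved) true label, so minimizing it targets the true outcome.
    """
    denom = 1.0 - rho0 - rho1  # requires rho0 + rho1 < 1
    l1 = log_loss(p, 1)  # loss incurred if the label were 1
    l0 = log_loss(p, 0)  # loss incurred if the label were 0
    return np.where(
        np.asarray(y_noisy) == 1,
        ((1 - rho0) * l1 - rho1 * l0) / denom,
        ((1 - rho1) * l0 - rho0 * l1) / denom,
    )
```

With `rho0 = rho1 = 0` this reduces to the ordinary log loss, and for nonzero rates its expectation over the noise distribution recovers the clean loss, which is the sense in which risk is minimized with respect to the target potential outcomes.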