Incentivizing honest performative predictions with proper scoring rules

Published: 08 May 2023, Last Modified: 26 Jun 2023
Venue: UAI 2023
Keywords: Oracle AI, proper scoring rules, self-fulfilling prophecy, performative prediction, AI safety
TL;DR: If reporting a prediction influences the world, proper scoring rules incentivize dishonesty; we analyze this problem and propose solutions.
Abstract: Proper scoring rules incentivize experts to accurately report beliefs, assuming predictions cannot influence outcomes. We relax this assumption and investigate incentives when predictions are performative, i.e., when they can influence the outcome of the prediction, such as when making public predictions about the stock market. We say a prediction is a fixed point if it accurately reflects the expert's beliefs after that prediction has been made. We show that in this setting, reports maximizing expected score generally do not reflect an expert's beliefs, and we give bounds on the inaccuracy of such reports. We show that, for binary predictions, if the influence of the expert's prediction on outcomes is bounded, it is possible to define scoring rules under which optimal reports are arbitrarily close to fixed points. However, this is impossible for predictions over more than two outcomes. We also perform numerical simulations in a toy setting, showing that our bounds are tight in some situations and that prediction error is often substantial (greater than 5–10%). Lastly, we discuss alternative notions of optimality, including performative stability, and show that they incentivize reporting fixed points.
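
To make the abstract's central phenomenon concrete, here is a minimal Python sketch in the spirit of the paper's toy simulations, but under assumptions of our own: a hypothetical linear influence model p(q) = P0 + ALPHA * q for how a report q shifts the true outcome probability, illustrative parameter values, and the quadratic (Brier) score standing in for a generic proper scoring rule. It compares the score-maximizing report with the fixed point, i.e., the report q* satisfying q* = p(q*) that matches the belief it induces.

```python
import numpy as np

# Hypothetical toy setting (our illustration, not the paper's exact model):
# after the expert reports q, the true probability of the binary outcome
# shifts linearly to p(q) = P0 + ALPHA * q, with ALPHA bounding how much
# the report can influence the world.
P0, ALPHA = 0.2, 0.3

def outcome_prob(q):
    """True outcome probability induced by reporting q."""
    return np.clip(P0 + ALPHA * q, 0.0, 1.0)

def expected_brier_loss(q):
    """Expected Brier loss of report q when the outcome distribution
    itself depends on q (lower is better; Brier is a proper score)."""
    p = outcome_prob(q)
    return p * (1.0 - q) ** 2 + (1.0 - p) * q ** 2

grid = np.linspace(0.0, 1.0, 100_001)

# Report that maximizes expected score (minimizes expected loss).
q_opt = grid[np.argmin(expected_brier_loss(grid))]

# Fixed point: the report equal to the belief it induces, q = p(q).
q_fix = grid[np.argmin(np.abs(grid - outcome_prob(grid)))]

print(f"score-maximizing report: {q_opt:.4f}")  # ~0.1250
print(f"fixed point (honest):    {q_fix:.4f}")  # P0/(1-ALPHA) ~ 0.2857
print(f"gap:                     {abs(q_opt - q_fix):.4f}")
```

With these illustrative parameters the score-maximizing report (0.125) sits well below the fixed point (about 0.286), a gap of the same order as the 5–10% errors the abstract reports; shrinking ALPHA shrinks the gap, consistent with the bounded-influence condition in the binary case.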