Regression for the Mean: Auto-Evaluation and Inference with Few Labels through Post-hoc Regression

Published: 01 May 2025, Last Modified: 18 Jun 2025. ICML 2025 poster. License: CC BY 4.0.
Abstract: The availability of machine learning systems that can effectively perform arbitrary tasks has led to their synthetic labels being used in statistical inference applications such as data analysis and model evaluation. The Prediction-Powered Inference (PPI) framework leverages both a large pool of pseudo-labelled data and a small sample of real, high-quality labels to produce a low-variance, unbiased estimate of the quantity of interest. Most work on PPI assumes a relatively sizable labelled set, which can be resource intensive to obtain. We find, however, that when labelled data is scarce, the PPI++ method can perform even worse than classical inference. We analyze this phenomenon by relating PPI++ to ordinary least squares regression, which likewise suffers from high variance at small sample sizes, and use this regression framework to better understand the efficacy of PPI. Motivated by this, we present two new PPI-based techniques that leverage robust regressors to produce even lower-variance estimators in the few-label regime.
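To make the setting concrete, here is a minimal sketch of the PPI mean estimator and its power-tuned PPI++ variant on synthetic data. This is an illustration under assumed notation, not the paper's implementation: the data-generating process, the plug-in choice of the tuning weight (a regression slope of labels on predictions, shrunk by N/(N+n)), and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a large unlabelled pool (N) with model predictions,
# and a small gold-labelled sample (n) where both labels and predictions exist.
N, n = 10_000, 50
f_unlab = rng.normal(1.0, 1.0, N)         # model predictions on the unlabelled pool
y_lab = rng.normal(1.0, 1.0, n)           # gold labels on the small sample
f_lab = y_lab + rng.normal(0.0, 0.5, n)   # predictions correlated with the labels

# Classical estimate: just average the few gold labels (unbiased, high variance).
theta_classical = y_lab.mean()

# PPI estimate: prediction mean on the pool, debiased by a "rectifier"
# term computed from the labelled sample.
theta_ppi = f_unlab.mean() + (y_lab - f_lab).mean()

# PPI++ weights the prediction component by a tuning parameter lambda chosen
# to minimise variance; one common plug-in is the slope of Y on f(X),
# shrunk toward zero by N / (N + n).  Estimating this slope from only n
# points is itself noisy, which is the few-label failure mode the paper studies.
lam = (np.cov(y_lab, f_lab)[0, 1] / f_lab.var(ddof=1)) * N / (N + n)
theta_ppipp = lam * f_unlab.mean() + (y_lab - lam * f_lab).mean()
```

All three estimators target the same population mean; the differences show up in their variances, which is exactly the trade-off the abstract describes.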
Lay Summary: Researchers often investigate problems and reach conclusions by analyzing large amounts of data. The process of generating insights from these data is known as statistical inference. Statistical inference needs a lot of data to be reliable, and collecting that data can put a large burden on researchers. Recently, machine learning (ML) models have become so effective at answering questions that they have been used in place of humans for labelling data, to save resources. Existing research has put forward ways to combine this ML-labelled data with human-labelled data so that statistical inference on the combined data is both reliable and free of potential bias from the ML model. We found, however, that these existing techniques perform poorly when very little human-labelled data is available. By relating them to a classic problem from statistics, we create two new modifications that allow for improved statistical inference even when little human-labelled data is available.
Primary Area: General Machine Learning->Evaluation
Keywords: auto-evaluation, prediction-powered-inference, regression, variance-reduction
Submission Number: 12908