Keywords: fairness, evaluation, multi-task, transfer
TL;DR: We introduce a method for disaggregated evaluation that improves accuracy when assessing subpopulation performance disparities in ML models.
Abstract: Disaggregated evaluation—estimation of performance of a machine learning model on different subpopulations—is a core task when assessing the performance and group fairness of AI systems.
A key challenge is that evaluation data is scarce, and subpopulations arising from intersections of attributes (e.g., race, sex, age) are often tiny.
Today, it is common for multiple clients to procure the same AI model from a model developer, and the task of disaggregated evaluation is faced by each client individually. This gives rise to what we call the *multi-task disaggregated evaluation problem*, wherein multiple clients seek to conduct a disaggregated evaluation of a given model in their own data setting (task). In this work, we develop a disaggregated evaluation method called **SureMap** that attains high estimation accuracy for both multi-task *and* single-task disaggregated evaluation of black-box models. SureMap's efficiency gains come from
(1) transforming the problem into structured simultaneous Gaussian mean estimation and (2) incorporating external data, e.g., from the AI system creator or from its other clients. Our method combines *maximum a posteriori* (MAP) estimation using a well-chosen prior with cross-validation-free tuning via Stein's unbiased risk estimate (SURE); a simplified sketch of this combination follows the abstract.
We evaluate SureMap on disaggregated evaluation tasks in multiple domains, observing significant accuracy improvements over several strong competitors.
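Below is a minimal, self-contained sketch of a single-task version of the MAP + SURE combination described above, under simplifying assumptions: each group's observed metric is treated as Gaussian with known variance and shrunk toward a common prior mean, with the prior scale chosen by grid-searching SURE. All names (`suremap_single_task`, `tau2_grid`) are illustrative, not from the paper, whose actual estimator uses a richer structured prior and can pool multi-task data.

```python
import numpy as np

# Illustrative single-task sketch (not the paper's implementation):
# each group's observed metric z[g] is modeled as N(theta[g], var[g]) with
# known variance and shrunk toward a common prior mean mu; the prior scale
# tau2 is chosen by minimizing Stein's unbiased risk estimate (SURE)
# rather than by cross-validation.

def map_estimate(z, var, mu, tau2):
    """Posterior mean under theta[g] ~ N(mu, tau2), z[g] ~ N(theta[g], var[g])."""
    lam = tau2 / (tau2 + var)          # per-group shrinkage weights
    return lam * z + (1.0 - lam) * mu

def sure(z, var, mu, tau2):
    """SURE of the MAP estimator's squared error (treating mu as fixed,
    a standard simplifying approximation)."""
    lam = tau2 / (tau2 + var)
    # For theta_hat = lam*z + (1-lam)*mu, d(theta_hat)/dz = lam, so
    # SURE = sum_g (1-lam)^2 (z-mu)^2 + var * (2*lam - 1).
    return np.sum((1.0 - lam) ** 2 * (z - mu) ** 2 + var * (2.0 * lam - 1.0))

def suremap_single_task(z, var, tau2_grid=None):
    """Hypothetical helper: pick tau2 on a grid by minimizing SURE."""
    if tau2_grid is None:
        tau2_grid = np.logspace(-6, 2, 200)
    mu = np.average(z, weights=1.0 / var)   # precision-weighted grand mean
    risks = [sure(z, var, mu, t) for t in tau2_grid]
    tau2 = tau2_grid[int(np.argmin(risks))]
    return map_estimate(z, var, mu, tau2)

# Toy usage: noisy per-group accuracy estimates with group-dependent noise.
rng = np.random.default_rng(0)
theta = rng.uniform(0.6, 0.9, size=12)       # true per-group accuracies
var = rng.uniform(1e-4, 1e-2, size=12)       # small groups -> large variance
z = theta + rng.normal(0.0, np.sqrt(var))    # naive disaggregated estimates
print(np.round(suremap_single_task(z, var), 3))
```

The design point this illustrates is that SURE replaces cross-validation: it estimates the estimator's squared-error risk directly from the observed data, so no scarce evaluation samples need to be held out for tuning.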
Supplementary Material: zip
Primary Area: Fairness
Submission Number: 4572