Where Does My Model Underperform? A Human Evaluation of Slice Discovery Algorithms

ICML 2023 Workshop SCIS, Submission 78

Published: 20 Jun 2023, Last Modified: 28 Jul 2023. SCIS 2023 Oral.
Keywords: model debugging, model evaluation, slice discovery, error discovery, error analysis, behavioral testing, group robustness
TL;DR: A controlled user study evaluating automated tools that discover underperforming subgroups without supervision or group labels ("methods for discovering spurious correlations").
Abstract: Machine learning (ML) models that achieve high average accuracy can still underperform on semantically coherent subsets ("slices") of data. This behavior can have significant societal consequences for the safety or bias of the model in deployment, but identifying these underperforming slices can be difficult in practice, especially in domains where practitioners lack access to group annotations to define coherent subsets of their data. Motivated by these challenges, ML researchers have developed new slice discovery algorithms that aim to group together coherent and high-error subsets of data. However, there has been little evaluation focused on whether these tools help humans form correct hypotheses about where (for which groups) their model underperforms. We conduct a controlled user study (N = 15) in which we show users 40 slices output by two state-of-the-art slice discovery algorithms and ask them to form hypotheses about an object detection model. Our results provide positive evidence that these tools offer a benefit over a naive baseline, and challenge dominant assumptions shared by past work.
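For readers unfamiliar with slice discovery, the sketch below illustrates one generic recipe: embed validation examples, cluster the embeddings, and rank clusters by error rate. This is only a minimal, assumed illustration of the general idea, not the specific algorithms evaluated in the paper; the function and variable names (discover_slices, embeddings, correct) are hypothetical.

# Illustrative sketch of a generic slice discovery recipe (assumed, not the
# paper's methods): cluster validation-set embeddings, rank clusters by error.
import numpy as np
from sklearn.cluster import KMeans

def discover_slices(embeddings: np.ndarray,
                    correct: np.ndarray,
                    n_slices: int = 40,
                    seed: int = 0):
    """Return candidate slices sorted from highest to lowest error rate.

    embeddings: (n_examples, d) feature vectors for validation examples.
    correct:    (n_examples,) booleans, True where the model was right.
    """
    labels = KMeans(n_clusters=n_slices, random_state=seed,
                    n_init=10).fit_predict(embeddings)
    slices = []
    for k in range(n_slices):
        idx = np.where(labels == k)[0]
        error_rate = 1.0 - correct[idx].mean()
        slices.append((error_rate, idx))
    # High-error clusters are the candidate "underperforming slices" a
    # practitioner would inspect to form hypotheses about model failures.
    return sorted(slices, key=lambda s: s[0], reverse=True)

In a user study setting such as the one described above, the top-ranked clusters would be shown to participants, who then hypothesize which coherent groups (if any) the model underperforms on.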
Submission Number: 78