Active Slice Discovery in Large Language Models

Published: 29 Sept 2025 · Last Modified: 23 Oct 2025
Venue: NeurIPS 2025 Reliable ML Workshop
License: CC BY 4.0
Keywords: Active Learning, Slice Discovery, Large Language Models
TL;DR: We explore how to adaptively discover systematic model errors from limited supervision via active learning.
Abstract: Large Language Models (LLMs) often exhibit systematic errors on specific subsets of data, known as "error slices". For instance, a slice can correspond to a certain demographic group for which a model does poorly at identifying toxic comments. Identifying error slices is crucial to understanding and improving models, but it is also challenging. An appealing approach to reducing the amount of manual annotation required is to actively group errors that are likely to belong to the same slice, while using limited access to an annotator to verify whether the chosen samples share the same pattern of model mistakes. In this paper, we formalize this approach as Active Slice Discovery and explore it empirically on the problem of discovering human-defined slices in toxicity classification. We examine the efficacy of active slice discovery under different choices of feature representations and active learning algorithms. On several slices, we find that uncertainty-based active learning algorithms are most effective, achieving competitive accuracy using 2–10% of the available slice membership information while significantly outperforming baselines.
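To make the setup concrete, here is a minimal sketch of uncertainty-based active learning for slice membership, in the spirit of the abstract. A binary "slice membership" classifier is fit on a few annotated examples, and the annotator is then queried on the samples whose predicted membership probability is closest to 0.5. The synthetic data, the logistic model, and all names below are illustrative assumptions, not the paper's actual features, slices, or implementation.

```python
# Hypothetical sketch of uncertainty-based active slice discovery.
# Assumption: error examples are represented as fixed feature vectors,
# and an annotator can reveal ground-truth slice membership on request.
import numpy as np

rng = np.random.default_rng(0)


def fit_logistic(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression (weights + bias)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b


def predict_proba(X, w, b):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))


# Synthetic "model error" embeddings: one linear region is the hidden slice.
X = rng.normal(size=(200, 2))
true_slice = (X[:, 0] + X[:, 1] > 1.0).astype(float)

labeled = list(rng.choice(len(X), size=10, replace=False))  # seed annotations
for _ in range(20):  # annotation budget: 20 extra queries
    w, b = fit_logistic(X[labeled], true_slice[labeled])
    p = predict_proba(X, w, b)
    p[labeled] = 1.0  # already-labeled points get maximal |p - 0.5|
    query = int(np.argmin(np.abs(p - 0.5)))  # most uncertain sample
    labeled.append(query)  # annotator reveals slice membership

w, b = fit_logistic(X[labeled], true_slice[labeled])
acc = ((predict_proba(X, w, b) > 0.5) == true_slice).mean()
print(f"slice-membership accuracy with {len(labeled)} labels: {acc:.2f}")
```

With 30 of 200 labels (15%), the uncertainty criterion concentrates queries near the slice boundary; the paper's claim is that on real slices this kind of strategy reaches competitive accuracy with only 2–10% of the membership labels.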
Submission Number: 86