Abstract: We present a method for identifying groups of test examples—slices—on which
a pre-trained model under-performs, a task now known as slice discovery. We
formalize coherence, a requirement that erroneous predictions within returned
slices should be wrong for the same reason, as a key property that a slice discovery
method should satisfy. We then leverage influence functions (Koh & Liang, 2017)
to derive a new slice discovery method, InfEmbed, which satisfies coherence
by returning slices whose examples are influenced similarly by the training data.
InfEmbed is computationally simple, consisting of applying K-Means clustering
to a novel representation we term influence embeddings. Empirically, we show
InfEmbed outperforms current state-of-the-art methods on a slice discovery
benchmark, and is effective for model debugging across several case studies.
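The clustering step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the influence-embedding computation (derived from influence functions) is not shown, and the synthetic points below merely stand in for real influence embeddings of test examples. The `kmeans` helper is a plain Lloyd's-algorithm implementation with farthest-point initialization, written here only to keep the sketch self-contained.

```python
import numpy as np

def kmeans(X, k, n_iter=20):
    """Plain Lloyd's K-Means with farthest-point initialization (illustrative only)."""
    # Farthest-point init: start from X[0], then greedily add the point
    # farthest from all chosen centers.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min(((X[:, None] - np.array(centers)[None]) ** 2).sum(-1), axis=1)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    # Lloyd iterations: assign each point to its nearest center, then recenter.
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Synthetic stand-in for influence embeddings: one row per test example.
# Two tight groups mimic two populations of test examples whose predictions
# are influenced similarly by the training data.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=0.1, size=(10, 2))
group_b = rng.normal(loc=5.0, scale=0.1, size=(10, 2))
embeddings = np.vstack([group_a, group_b])

# Each cluster of embeddings is a candidate slice.
slice_labels = kmeans(embeddings, k=2)
```

Examples whose embeddings land in the same cluster form one returned slice; because influence embeddings encode how the training data influences each prediction, examples in a slice are influenced similarly, which is the coherence property the abstract describes.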