Keywords: neural network, deep learning, fairness, counterfactual, dataset, label bias
TL;DR: We propose a method that generates "counterfactual training datasets" to efficiently uncover fairness violations in neural networks across both training and inference.
Abstract: Interpreting the inference-time behavior of deep neural networks remains a challenging problem. Existing approaches to counterfactual explanation typically ask: What is the closest alternative $\textit{input}$ that would alter the model's prediction in a desired way?
In contrast, we explore $\textbf{counterfactual datasets}$. Rather than perturbing the input, our method efficiently finds the closest alternative $\textit{training dataset}$, one that differs from the original dataset by changing a few labels. Training a new model on this altered dataset can then yield a different prediction for a given test instance.
This perspective provides a new way to assess fairness by directly analyzing the influence of label bias on training and inference. Our approach can be characterized as probing whether a given prediction depends on biased labels.
Since exhaustively enumerating all possible alternative datasets is infeasible, we develop analysis techniques that trace how bias in the training data may propagate through the learning algorithm to the trained network.
Our method heuristically ranks and modifies the labels of a bounded number of training examples to construct a counterfactual dataset, retrains the model, and checks whether its prediction on a chosen test case changes.
We evaluate our approach on feedforward neural networks, using over 1,100 test cases drawn from 7 widely used fairness datasets. Results show that it modifies only a small subset of training labels, highlighting its ability to pinpoint the critical training examples that drive prediction changes.
Finally, we demonstrate how counterfactual training datasets reveal connections between training examples and test cases, offering an interpretable way to probe dataset bias.
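The search procedure described above (rank candidate training examples, flip a bounded number of labels, retrain, and test whether the prediction changes) can be sketched as follows. This is a minimal illustrative mock-up, not the paper's implementation: it uses a plain logistic-regression model and a simple similarity-based ranking proxy in place of the paper's bias-propagation heuristic, and all function names (`train_logreg`, `rank_candidates`, `counterfactual_dataset`) are hypothetical.

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=200):
    """Train a tiny logistic-regression model by gradient descent
    (stand-in for the neural network studied in the paper)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y                      # gradient of logistic loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def predict(w, b, x):
    return int((x @ w + b) > 0)

def rank_candidates(X, x_test):
    """Illustrative ranking proxy: training points whose features are most
    aligned with the test input come first. The paper uses a heuristic that
    traces label-bias propagation instead."""
    return np.argsort(-np.abs(X @ x_test))

def counterfactual_dataset(X, y, x_test, budget=5):
    """Flip up to `budget` labels, retraining after each flip, and stop as
    soon as the prediction on x_test changes."""
    w, b = train_logreg(X, y)
    original = predict(w, b, x_test)
    y_cf = y.copy()
    for k in rank_candidates(X, x_test)[:budget]:
        y_cf[k] = 1.0 - y_cf[k]            # flip one training label
        w2, b2 = train_logreg(X, y_cf)     # retrain on the altered dataset
        if predict(w2, b2, x_test) != original:
            return y_cf, original          # counterfactual dataset found
    return None, original                  # no flip within budget
```

On a toy linearly separated dataset, flipping a handful of the highest-ranked labels is enough to change the retrained model's prediction on a nearby test point, mirroring the paper's observation that only a small subset of training labels needs to change.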
Serve As Reviewer: ~Jacqueline_Mitchell1
Submission Number: 10