FairSHAP: Preprocessing for Fairness Through Attribution-Based Data Augmentation

Published: 22 Sept 2025, Last Modified: 22 Sept 2025. WiML @ NeurIPS 2025. License: CC BY 4.0
Keywords: Fairness, Shapley value, Feature Attribution, Data Augmentation
Abstract: Ensuring fairness in machine learning (ML) models is critical, particularly in high-stakes domains where biased decisions can have serious societal consequences. Existing preprocessing approaches generally lack transparent mechanisms for identifying which features or instances are responsible for unfairness, obscuring the rationale behind data modifications. We introduce FairSHAP, a novel preprocessing framework that leverages Shapley value attribution to improve both individual and group fairness. FairSHAP identifies fairness-critical instances in the training data using an interpretable measure of feature importance and systematically modifies them through instance-level matching across sensitive groups. This process, described in Figure 1, reduces discriminative risk (DR), an individual fairness metric, while preserving data integrity and model accuracy. FairSHAP bridges explainability and fairness by connecting Shapley values with DR, and is supported by both theoretical proofs and empirical evidence showing that improving individual fairness can also improve group fairness. Specifically, we demonstrate that FairSHAP significantly improves demographic parity and equality of opportunity across diverse tabular datasets, achieving fairness gains with minimal data perturbation and, in some cases, improved predictive performance. As a model-agnostic and transparent method, FairSHAP is broadly applicable to tabular data, supports a range of models and SHAP algorithms, integrates seamlessly into existing ML pipelines, achieves comparable or superior fairness with significantly less data modification than benchmark methods, and provides actionable insights into the sources of bias.
Submission Number: 146
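The abstract describes FairSHAP's pipeline (attribute, match, modify) only at a high level. Below is a minimal sketch of that idea in Python, assuming a numeric tabular DataFrame `X` with a binary sensitive column present in both groups. The helper name `fairshap_preprocess`, the `top_k` budget, the L2 nearest-neighbour matching, the 0.5 blending factor, and the XGBoost base model are all illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np
import pandas as pd
import shap
from xgboost import XGBClassifier


def fairshap_preprocess(X: pd.DataFrame, y: np.ndarray, sensitive: str,
                        top_k: int = 50) -> pd.DataFrame:
    """Sketch of attribution-guided data augmentation (not the paper's exact method)."""
    # Fit a baseline model on the original data.
    model = XGBClassifier().fit(X, y)

    # Shapley attributions for every training instance and feature.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Treat instances whose predictions lean heavily on the sensitive
    # attribute (large absolute attribution) as fairness-critical.
    s_idx = X.columns.get_loc(sensitive)
    criticality = np.abs(shap_values[:, s_idx])
    critical = np.argsort(criticality)[-top_k:]

    # For each critical instance, find its nearest neighbour in the other
    # sensitive group (crude L2 matching on non-sensitive features) and
    # pull its non-sensitive features toward that match.
    X_aug = X.copy()
    other_feats = [c for c in X.columns if c != sensitive]
    for i in critical:
        other_group = X[X[sensitive] != X.iloc[i][sensitive]]
        dists = np.linalg.norm(
            other_group[other_feats].values.astype(float)
            - X.iloc[i][other_feats].values.astype(float), axis=1)
        match = other_group.index[np.argmin(dists)]
        # Blend toward the matched instance; the 0.5 factor is an
        # arbitrary illustrative choice, not a tuned parameter.
        X_aug.loc[X.index[i], other_feats] = (
            0.5 * X.iloc[i][other_feats] + 0.5 * X.loc[match, other_feats])
    return X_aug
```

A downstream model would then be retrained on the returned `X_aug`; in the paper's framing, the matching step is what drives the reduction in discriminative risk while keeping the number of modified cells small.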