Abstract: As artificial intelligence is integrated into real-world decision-support systems, there is increasing interest in tools that help identify potential biases and fairness-related harms of machine learning models. While existing toolkits provide ways to evaluate harms associated with discrete predicted outcomes, the assessment of disparities in the epistemic value provided by continuous risk scores remains relatively underexplored. Moreover, few works focus on identifying the biases at the root of the harm. In this work, we present the Subgroup Harm Assessor, a visual analytics tool that allows users to (1) identify disparities in the epistemic value of risk-scoring models via subgroup discovery over disparities in model log loss, and (2) evaluate the extent to which such a disparity may be caused by disparities in the informativeness of features, via SHapley Additive exPlanations (SHAP) of the model loss.
DOI: 10.1007/978-3-031-70371-3_31
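The two analyses named in the abstract can be illustrated in code. The following is a minimal sketch, not the authors' tool: it assumes a scikit-learn gradient-boosted classifier and the `shap` package, whose `TreeExplainer` supports `model_output="log_loss"` under interventional feature perturbation so that SHAP values attribute the loss (rather than the prediction) to individual features. The data, the `group` attribute, and the feature names are all hypothetical, constructed so that one feature is deliberately less informative for one subgroup.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

# Hypothetical data: "group" is a sensitive attribute used only for the
# subgroup analysis, not as a model feature. feature_a is made noisier
# (less informative) for group 1 to create an epistemic-value disparity.
rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({"feature_a": rng.normal(size=n),
                  "feature_b": rng.normal(size=n)})
group = rng.integers(0, 2, size=n)
noise_scale = np.where(group == 1, 1.5, 0.5)
y = (X["feature_a"] + rng.normal(scale=noise_scale) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# (1) Disparity in epistemic value: compare per-subgroup log loss of the
# continuous risk score.
proba = model.predict_proba(X_te)[:, 1]
for g in np.unique(g_te):
    m = g_te == g
    print(f"group {g}: log loss = {log_loss(y_te[m], proba[m]):.3f}")

# (2) Attribute the model *loss* to features with SHAP. Each SHAP value is a
# per-feature contribution to the log loss of that instance; a feature whose
# mean loss contribution is higher in one subgroup is less informative there.
explainer = shap.TreeExplainer(model,
                               data=X_tr.iloc[:200],  # background sample
                               model_output="log_loss",
                               feature_perturbation="interventional")
loss_shap = np.asarray(explainer.shap_values(X_te, y_te))
for g in np.unique(g_te):
    m = g_te == g
    contrib = dict(zip(X.columns, loss_shap[m].mean(axis=0).round(3)))
    print(f"group {g}: mean per-feature loss SHAP = {contrib}")
```

On this synthetic setup, one would expect a higher log loss for group 1 and a larger mean loss contribution from `feature_a` in that subgroup, mirroring the kind of disparity the tool is designed to surface; the paper itself additionally performs subgroup *discovery*, i.e., searching for the subgroups with the largest loss disparities rather than taking them as given.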