Keywords: Fairness measure, machine learning, multi-attribute protection
TL;DR: Does Machine Bring in Extra Bias in Learning? Approximating Discrimination Within Models Quickly
Abstract: Mitigating discrimination within machine learning (ML) models is complicated because multiple sensitive factors may interweave with one another, both hierarchically and historically. Yet few existing fairness measures can capture the discrimination level within ML models when multiple sensitive attributes are involved. To bridge this gap, we propose a fairness measure based on distances between sets from a manifold perspective, named 'harmonic fairness measure via manifolds (HFM)', with three optional versions, which supports fine-grained discrimination evaluation over several sensitive attributes with binary or multiple values. To accelerate the computation of distances between sets, we further propose approximation algorithms for efficient bias evaluation. The empirical results demonstrate that our proposed fairness measure HFM is valid and that the approximation algorithms are effective and efficient.
Is NeurIPS Submission: No
Submission Number: 47