Abstract: The automation of decision systems with artificial intelligence (AI) has introduced hidden biases, making it challenging to explain these decisions and to identify responsibilities. As a result, a new field of research on algorithmic fairness has emerged, in which detecting biases and mitigating them is essential to ensure fair and discrimination-free decisions. This paper contributes: (1) a categorization of biases and how they are associated with the different phases of an AI model’s development (including the data-generation phase); (2) a review of fairness metrics for auditing data and the AI models trained on them (including model-agnostic approaches to fairness); and (3) a novel taxonomy of the procedures to mitigate biases in the different phases of an AI model’s development (pre-processing, training, and post-processing), together with transversal actions that help produce fairer models.
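As an illustration of the kind of model-agnostic fairness audit the abstract refers to, the sketch below computes the statistical parity difference, one of the standard group-fairness metrics in this literature. It is a minimal example under assumed inputs (binary predictions and a binary protected-attribute indicator); the function name and toy data are illustrative, not taken from the paper.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1) from any classifier (model-agnostic:
            only the outputs are needed, not the model internals).
    group:  protected-attribute indicator (0 = unprivileged, 1 = privileged).
    A value of 0 indicates demographic parity; the sign shows which
    group receives positive outcomes more often.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_unpriv = y_pred[group == 0].mean()
    rate_priv = y_pred[group == 1].mean()
    return rate_unpriv - rate_priv

# Toy audit: predictions for 8 individuals, 4 per group (hypothetical data).
preds  = [1, 1, 0, 1, 0, 1, 0, 0]
groups = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(preds, groups))  # -0.5: unprivileged group favored less often
```

Because such a metric only consumes predictions and group labels, it can be applied in the post-processing phase to audit any trained model, in line with the model-agnostic perspective mentioned above.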