Adversarial mitigation to reduce unwanted biases in machine learning (Réduire les biais indésirables en apprentissage automatique par atténuation adverse)

Published: 01 Jan 2022, Last Modified: 21 Sept 2023
Abstract: The past few years have seen a dramatic rise in academic and societal interest in fair machine learning. As a result, significant work has been done to include fairness constraints in the training objectives of machine learning algorithms. The primary purpose is to ensure that model predictions do not depend on any sensitive attribute, such as gender or race. Although this notion of independence is uncontroversial in a general context, it can theoretically be defined in many different ways depending on how one sees fairness. As a result, many recent papers tackle this challenge using their "own" objectives and notions of fairness. These objectives can be categorized into two families: individual fairness and group fairness. This thesis gives an overview of the methodologies applied in these two families in order to encourage good practices. We then identify and fill gaps by presenting new metrics and new Fair-ML algorithms that are better suited to specific contexts.
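Since the title names adversarial mitigation, a minimal sketch may help make the idea concrete. Below is an illustrative PyTorch example of adversarial debiasing in the spirit popularized by Zhang et al. (2018), not the thesis's own algorithm: a predictor is trained on the task while an adversary tries to recover the sensitive attribute from the predictor's output, and the predictor is additionally trained to fool the adversary. The synthetic data, network sizes, and the penalty weight `lam` are all illustrative assumptions.

```python
# Illustrative sketch of adversarial debiasing (NOT the thesis's exact method):
# a predictor learns the task; an adversary tries to recover the sensitive
# attribute s from the predictor's output; the predictor is penalized when
# the adversary succeeds, pushing predictions toward independence from s.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: 500 samples, 8 features, binary label y,
# binary sensitive attribute s correlated with feature 1.
X = torch.randn(500, 8)
y = (X[:, 0] + 0.5 * torch.randn(500) > 0).float().unsqueeze(1)
s = (X[:, 1] > 0).float().unsqueeze(1)

predictor = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # fairness penalty strength (hypothetical value)

for epoch in range(200):
    # 1) Train the adversary to predict s from the (detached) predictor logit.
    logits = predictor(X).detach()
    opt_adv.zero_grad()
    adv_loss = bce(adversary(logits), s)
    adv_loss.backward()
    opt_adv.step()

    # 2) Train the predictor on the task while maximizing the adversary's
    #    loss, i.e. minimizing (task loss - lam * adversary loss).
    opt_pred.zero_grad()
    logits = predictor(X)
    pred_loss = bce(logits, y) - lam * bce(adversary(logits), s)
    pred_loss.backward()
    opt_pred.step()
```

In this formulation the adversary sees only the predictor's output, which corresponds roughly to a demographic-parity (group fairness) notion of independence; feeding the adversary the output together with the true label instead would target an equalized-odds-style criterion.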