Learning Fair Representations: Mitigating Statistical Dependencies

Published: 2024 · Last Modified: 17 May 2025 · HCI (52) 2024 · CC BY-SA 4.0
Abstract: Growing social awareness that machine learning algorithms can make biased decisions has driven a surge of Responsible AI research in recent years. Algorithmic fairness is one of the concepts that must be considered when designing responsible AI models. The goal of such studies is to ensure that decisions made by machine learning algorithms in automated decision-making systems are free of bias and unaffected by sensitive information that could lead to discrimination and adverse consequences for individuals. Learning a fair representation is an effective approach to mitigating algorithmic bias and has been applied successfully in this domain. The objective of these approaches is to create representations that remove sensitive information while retaining the non-sensitive information required for the task. In this paper, we propose a novel fair representation framework that generates fair representations which can be easily adapted to a range of downstream classification tasks. Our proposed algorithm integrates the \(\beta \)-VAE encoder with a classifier to extract meaningful features. Simultaneously, it leverages the Hilbert-Schmidt independence criterion (HSIC) [24] as a constraint to maintain statistical independence between the representations and the sensitive attribute. Experimental results on three benchmark datasets demonstrate our model's ability to create fair representations and achieve a better fairness-accuracy tradeoff than state-of-the-art models.
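The abstract's independence constraint rests on the empirical Hilbert-Schmidt independence criterion. As a rough illustration of how such a constraint could be computed (this is a generic NumPy sketch of the standard biased HSIC estimator with RBF kernels, not the paper's actual implementation; the function names, the kernel choice, and the bandwidth `sigma` are assumptions):

```python
import numpy as np

def rbf_kernel(x, sigma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared Euclidean distances.
    sq = np.sum(x ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * x @ x.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased empirical HSIC estimate between paired samples x and y.

    x, y: arrays of shape (n, d_x) and (n, d_y).
    Returns a nonnegative scalar; larger values indicate stronger
    statistical dependence between the two sets of samples.
    """
    n = x.shape[0]
    k = rbf_kernel(x, sigma)
    l = rbf_kernel(y, sigma)
    h = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    return np.trace(k @ h @ l @ h) / (n - 1) ** 2
```

In a fair-representation setting of the kind the abstract describes, such a term would be evaluated between a batch of learned representations and the corresponding sensitive attribute, and added to the training loss so that minimizing it pushes the two toward statistical independence.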