Learning fair latent representations with Multi-Task Deep Learning

ICLR 2026 Conference Submission 17667 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Multi-task Deep Learning, Auxiliary Task, Fairness, Multi-Objective Optimisation
Abstract: The problem of group-level fairness in machine learning has received increasing attention due to its critical role in ensuring the reliability and trustworthiness of models deployed in sensitive domains. Mainstream approaches typically incorporate fairness by enforcing constraints directly within the training objective. However, treating fairness solely as a regularisation term can lead to suboptimal trade-offs, with loss of accuracy or insufficient fairness guarantees. In this work, we propose a novel approach that formulates fairness as an auxiliary task in a Multi-Task Learning (MTL) paradigm. In contrast to embedding fairness constraints into a single-task objective, explicitly modelling the problem as multi-objective optimisation (MOO) makes it possible to decouple the learning of a fair internal representation from the optimisation of the predictive task: these two conflicting objectives are optimised concurrently. We introduce two novel fairness loss functions that are better tailored to an MTL approach, and we provide a theoretical analysis of the generalisation properties of the proposed method. Experimental analysis on benchmark datasets shows that, despite not embedding a fairness loss function directly in the predictive task, the MTL formulation consistently improves group-level fairness metrics compared to both standard regularisation-based methods and other MTL architectures, while maintaining competitive predictive performance.
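To make the setup concrete, below is a minimal PyTorch sketch of a hard-parameter-sharing MTL model in which fairness is posed as an auxiliary objective on the shared latent representation, rather than as a regulariser folded into the predictive loss. The encoder and head sizes, the mean-matching parity penalty, and the fixed weighted sum (standing in for a proper MOO solver) are all illustrative assumptions; they are not the submission's actual architecture or its two proposed fairness losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FairMTLNet(nn.Module):
    """Hard-parameter-sharing MTL: one shared encoder feeds the
    predictive head, while the fairness objective acts directly on
    the shared latent representation."""
    def __init__(self, in_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.pred_head = nn.Linear(hidden_dim, 1)  # binary target

    def forward(self, x):
        z = self.encoder(x)  # shared latent representation
        return self.pred_head(z), z

def representation_parity_loss(z, s):
    """Illustrative auxiliary fairness loss: squared distance between
    the group-conditional means of the latent representation (a crude
    mean-matching, statistical-parity-style penalty)."""
    return (z[s == 0].mean(dim=0) - z[s == 1].mean(dim=0)).pow(2).sum()

# One optimisation step. A fixed weighted sum stands in for the
# multi-objective solver; the paper optimises the two conflicting
# objectives concurrently rather than through a single scalarisation.
model = FairMTLNet(in_dim=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(128, 10)                   # features
y = torch.randint(0, 2, (128, 1)).float()  # binary labels
s = torch.randint(0, 2, (128,))            # binary sensitive attribute

logits, z = model(x)
task_loss = F.binary_cross_entropy_with_logits(logits, y)
fair_loss = representation_parity_loss(z, s)

opt.zero_grad()
(task_loss + 1.0 * fair_loss).backward()
opt.step()
```

Under this reading, gradients from the auxiliary loss flow only through the shared encoder, pushing it toward group-invariant representations while the prediction head remains free to optimise accuracy; how the two gradients are actually combined is the MOO component of the paper.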
Supplementary Material: pdf
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 17667