Keywords: Algorithmic fairness, Bayesian inference, Gibbs posterior
Abstract: With the growing importance of trustworthy AI, algorithmic fairness has emerged as a critical concern.
Among various fairness notions, group fairness, which measures a model's bias across sensitive groups, has received significant attention.
While much work on group-fair models has focused on satisfying group fairness constraints, model uncertainty has received relatively little attention, despite its importance for robust and trustworthy decision-making.
To address this, we adopt a Bayesian framework to capture model uncertainty in fair model training.
We first define group-fair posterior distributions and then introduce fair variational Bayesian inference.
We then propose a novel distribution, termed the matched Gibbs posterior, as a proxy distribution for fair variational Bayesian inference, built on a new group fairness measure called the matched deviation.
A notable feature of the matched Gibbs posterior is that it approximates the fairness-constrained posterior distribution well without requiring heavy computation.
Theoretically, we show that the matched deviation is closely related to existing group fairness measures, which yields desirable fairness guarantees.
Computationally, by treating the matching function in the matched deviation as a learnable parameter, we develop an efficient MCMC algorithm.
Experiments on real-world datasets demonstrate that the matched Gibbs posterior outperforms other methods in balancing the uncertainty–fairness and utility–fairness trade-offs, while also offering additional desirable properties.
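For intuition, the following is a minimal sketch of the general Gibbs-posterior idea the abstract builds on: a posterior proportional to exp(-lambda * penalized risk) times a prior, sampled with random-walk Metropolis. It does not reproduce the paper's matched deviation or its learnable matching function; the demographic-parity gap used as the fairness penalty, the weights lam and gamma, and the toy logistic model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- toy data: features X, binary labels y, binary sensitive attribute s ---
n, d = 500, 3
X = rng.normal(size=(n, d))
s = rng.integers(0, 2, size=n)                # sensitive group membership
logits = X @ np.array([1.0, -1.0, 0.5]) + 0.8 * s
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def log_gibbs_posterior(theta, lam=1.0, gamma=5.0):
    """log pi(theta | data) up to a constant:
    -lam * n * (risk + gamma * fairness_gap) + log prior.
    The fairness gap here is a demographic-parity stand-in,
    NOT the paper's matched deviation."""
    p = sigmoid(X @ theta)
    eps = 1e-12
    # empirical risk: average logistic loss
    risk = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    # demographic-parity gap between the two sensitive groups
    gap = abs(p[s == 1].mean() - p[s == 0].mean())
    log_prior = -0.5 * theta @ theta           # standard normal prior
    return -lam * n * (risk + gamma * gap) + log_prior

# --- random-walk Metropolis over theta ---
theta = np.zeros(d)
cur = log_gibbs_posterior(theta)
samples = []
for t in range(5000):
    prop = theta + 0.05 * rng.normal(size=d)
    new = log_gibbs_posterior(prop)
    if np.log(rng.random()) < new - cur:       # accept/reject step
        theta, cur = prop, new
    if t >= 1000:                              # discard burn-in
        samples.append(theta.copy())

samples = np.asarray(samples)
print("posterior mean:", samples.mean(axis=0))
```

The posterior samples can then be used to quantify uncertainty in both predictions and the fairness gap itself; the paper's contribution, by contrast, is a specific penalty (the matched deviation) and an MCMC scheme that treats its matching function as a learnable parameter, neither of which is modeled above.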
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 18116