Individual Fairness in Bayesian Neural Networks

Published: 20 Jun 2023, Last Modified: 18 Jul 2023, AABI 2023
Keywords: Bayesian Neural Networks, Individual Fairness, Adversarial Attacks, Approximate Inference
TL;DR: We investigate individual fairness in Bayesian Neural Networks trained with approximate Bayesian inference, develop statistical and gradient-based estimation techniques, and compare against deterministic Neural Networks.
Abstract: We study Individual Fairness (IF) for Bayesian neural networks (BNNs). Specifically, we consider the $\epsilon$-$\delta$-individual fairness notion, which requires that, for any pair of input points that are $\epsilon$-similar according to a given similarity metric, the output of the BNN is within a given tolerance $\delta > 0$. We leverage bounds on statistical sampling over the input space and the relationship between adversarial robustness and individual fairness to derive a framework for the systematic estimation of $\epsilon$-$\delta$-IF, designing Fair-FGSM and Fair-PGD as global, fairness-aware extensions of gradient-based attacks for BNNs. We empirically study the IF of a variety of approximately inferred BNNs with different architectures on fairness benchmarks, and compare against deterministic models learnt using frequentist techniques. Interestingly, we find that BNNs trained by means of approximate Bayesian inference consistently tend to be markedly more individually fair than their deterministic counterparts.
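To make the attack idea concrete: $\epsilon$-$\delta$-IF asks that $|f(x) - f(x')| \le \delta$ whenever $d(x, x') \le \epsilon$ for the given similarity metric $d$. Below is a minimal, hypothetical PyTorch sketch of a one-step, Fair-FGSM-style estimate of the worst-case predictive gap; it is not the paper's implementation. The function name, the `weights` parameterization of a weighted $L_\infty$ similarity metric, and the Monte Carlo sample count are all illustrative assumptions, and `model` is assumed to draw a fresh posterior sample on each forward pass (e.g., MC dropout).

```python
import torch

def fair_fgsm_gap(model, x, eps, delta, n_samples=32, weights=None):
    """One-step FGSM-style search for an eps-similar x' maximizing the
    posterior-predictive gap |E[f(x)] - E[f(x')]| (illustrative sketch).

    model:   stochastic forward pass (one BNN posterior sample per call).
    eps:     similarity radius under a weighted L-infinity metric.
    weights: hypothetical per-feature scales of the metric (uniform if None).
    """
    weights = torch.ones_like(x) if weights is None else weights

    def predictive_mean(z):
        # Monte Carlo estimate of the BNN's posterior predictive mean.
        return torch.stack([model(z) for _ in range(n_samples)]).mean(0)

    x_adv = x.clone().requires_grad_(True)
    # Gradient of the predictive mean w.r.t. the input: to first order,
    # stepping along its sign maximizes the output gap within the ball.
    predictive_mean(x_adv).sum().backward()
    with torch.no_grad():
        # Per-feature scaling keeps x' eps-similar to x under the weighted
        # L-infinity metric: |x_i - x'_i| <= eps * w_i for every feature i.
        x_prime = x + eps * weights * x_adv.grad.sign()
        gap = (predictive_mean(x_prime) - predictive_mean(x)).abs().max()
    # The pair (x, x') empirically violates eps-delta-IF if gap > delta.
    return gap.item(), gap.item() > delta
```

A multi-step, projected variant of the same loop (re-clipping to the weighted $L_\infty$ ball after each step) would play the role the abstract assigns to Fair-PGD.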