Gone With the Bits: Benchmarking Bias in Facial Phenotype Degradation Under Low-Rate Neural Compression

Published: 28 Jun 2024, Last Modified: 25 Jul 2024
Venue: NextGenAISafety 2024 Poster
License: CC BY 4.0
Keywords: Fairness, Bias, Neural Compression, Phenotype Classification
TL;DR: This study reveals that neural image compression distorts facial phenotypes, with significant bias across racial groups, and that using a racially balanced dataset does not mitigate this issue.
Abstract: In this study, we investigate how facial phenotypes are distorted under neural image compression and how this distortion varies across racial groups. Neural compression methods are gaining popularity due to their impressive rate-distortion performance and their ability to compress to extremely small bitrates, below 0.1 bits per pixel (bpp). As deep learning models, however, they are prone to acquiring bias during training, leading to unfair outcomes for individuals in different groups. We first demonstrate, by benchmarking five popular neural compression algorithms, that compressing facial images to low-bitrate regimes degrades specific phenotypes (e.g., skin type). Next, we highlight the bias in this phenotype degradation across different racial groups. We then show that leveraging a racially balanced dataset does not mitigate this bias. Finally, we examine the relationship between bias and the realism of reconstructed images at different bitrates.
Submission Number: 135