Addressing Racial Bias in AI-Driven Photo Restoration: Enhancing Fidelity for African Facial Features
Keywords: AI bias, photo restoration, racial equity, generative models, computer vision
Abstract: AI-powered photo restoration tools have made it easier to preserve historical images by repairing damage, improving resolution, and colorizing grayscale photographs. However, these models often exhibit racial bias, altering African facial features such as skin tones, textures, and bone structures toward Eurocentric standards. This paper examines these disparities in common generative models and links them to the underrepresentation of African faces in training sets such as FFHQ and CelebA, where diverse ethnic groups make up less than 10% of the samples. We propose a fine-tuning pipeline built on a curated dataset of 5,000 public-domain African heritage images, augmented with synthetic damage (such as scratches and fading) through adversarial training. Our method uses StyleGAN3 with perceptual loss functions tailored to ethnicity-specific traits to preserve cultural fidelity.
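As a rough illustration of the synthetic damage augmentation mentioned above, the sketch below overlays scratch-like lines and a fading blend on an RGB image. The function names, parameters, and damage model are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of synthetic damage augmentation (scratches + fading),
# assuming images are float32 RGB arrays in [0, 1]. Names and parameters
# are illustrative; the paper's augmentation may differ.
import numpy as np


def add_scratches(img: np.ndarray, n_scratches: int = 5, rng=None) -> np.ndarray:
    """Overlay thin near-white lines to mimic physical scratches."""
    rng = rng or np.random.default_rng()
    out = img.copy()
    h, w = img.shape[:2]
    for _ in range(n_scratches):
        x0, x1 = rng.integers(0, w, size=2)
        for y in range(h):
            # Interpolate the scratch path between its two endpoints.
            x = int(x0 + (x1 - x0) * y / max(h - 1, 1))
            out[y, max(x - 1, 0):x + 1, :] = 1.0
    return out


def add_fading(img: np.ndarray, strength: float = 0.4) -> np.ndarray:
    """Blend toward a light gray tone to mimic faded, low-contrast prints."""
    faded_tone = np.full_like(img, 0.8)
    return (1.0 - strength) * img + strength * faded_tone


if __name__ == "__main__":
    clean = np.random.rand(256, 256, 3).astype(np.float32)  # stand-in image
    damaged = add_fading(add_scratches(clean), strength=0.3)
    print(damaged.shape, float(damaged.min()), float(damaged.max()))
```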
The approach incorporates edge-detection modules to keep facial geometry intact and a custom loss that combines FID and SSIM for quantitative assessment. Tests on a sample of 500 damaged photos show that baseline models alter African features in 72% of cases (for instance, lightening skin tones by 15-25% in HSV space); our fine-tuned model reduces this to 18%, lowering FID from 0.42 to 0.21 and raising SSIM from 0.78 to 0.92.
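As one concrete reading of the HSV-space measurement above, the hedged sketch below computes the relative change in the mean Value (brightness) channel over a face region before and after restoration. The mask handling and percentage definition are assumptions, not the paper's exact protocol.

```python
# Hedged sketch of an HSV-space skin-tone shift measurement: compare the
# mean Value channel of a masked face region before and after restoration.
import numpy as np


def mean_value_channel(img: np.ndarray, mask: np.ndarray) -> float:
    """The HSV Value channel is the per-pixel max over RGB; average it over the mask."""
    v = img.max(axis=-1)
    return float(v[mask].mean())


def skin_lightening_pct(original: np.ndarray, restored: np.ndarray,
                        face_mask: np.ndarray) -> float:
    """Relative increase (%) in mean skin brightness after restoration."""
    v_before = mean_value_channel(original, face_mask)
    v_after = mean_value_channel(restored, face_mask)
    return 100.0 * (v_after - v_before) / v_before


if __name__ == "__main__":
    orig = np.random.rand(128, 128, 3)
    rest = np.clip(orig * 1.2, 0.0, 1.0)          # simulate ~20% lightening
    mask = np.ones(orig.shape[:2], dtype=bool)    # trivial whole-image "face" mask
    print(f"skin lightening: {skin_lightening_pct(orig, rest, mask):.1f}%")
```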
This work emphasizes the importance of inclusive datasets in computer vision and offers a practical guide for equitable AI in imaging. Future directions include combining these techniques for video restoration and developing ethical guidelines for dataset curation.
Submission Number: 4