Ano-Face-Fair: Race-Fair Face Anonymization in Text-to-Image Synthesis using Simple Preference Optimization in Diffusion Model
Keywords: Face Anonymization, Diffusion Model, Race Fairness, Face Editing, Multi-modal Prompting, Human Preference Optimization
TL;DR: "Ano-Face-Fair" introduces a face anonymization method using diffusion models. It preserves key features, ensures racial fairness, and enhances image quality through Simple Preference Optimization.
Abstract: Face anonymization requires effectively hiding identities while preserving essential features, yet existing models often show racial bias, particularly in representing Asian faces. We propose "Ano-Face-Fair," an approach for race-fair face anonymization based on Stable Diffusion-v2 Inpainting with three key contributions: (1) a Focused Feature Enhancement (FFE) loss $\boldsymbol{L_{FFE}}$ for detailed facial feature generation across diverse racial groups, (2) a Difference (DIFF) loss $\boldsymbol{L_{DIFF}}$ that prevents catastrophic forgetting by maintaining distinct racial characteristics, and (3) Simple Preference Optimization ($\mathbf{SimPO}$) for enhanced synthetic image consistency. Our method enables flexible control through both mask- and text-based prompting, achieving robust anonymization while maintaining high image quality and accuracy in Asian face generation. We validate the method's effectiveness through extensive experiments on facial image generation across diverse racial groups. By addressing racial biases in image generation and demonstrating robust, realistic face editing under mask- and text-based prompting, this work advances face anonymization and contributes to more ethical generative models. Code: https://github.com/i3n7g3/Ano-Face-Fair
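For context on the preference-optimization component named in the abstract, the original SimPO formulation (a reference-free variant of DPO) optimizes a length-normalized, margin-shifted Bradley-Terry objective over preferred/dispreferred pairs:

$$\mathcal{L}_{\mathrm{SimPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log \sigma\!\left(\frac{\beta}{|y_w|}\log \pi_\theta(y_w \mid x) \;-\; \frac{\beta}{|y_l|}\log \pi_\theta(y_l \mid x) \;-\; \gamma\right)\right]$$

where $\sigma$ is the sigmoid, $y_w$ and $y_l$ are the preferred and dispreferred outputs for prompt $x$, $\beta$ scales the implicit reward, and $\gamma$ is a target reward margin; no reference model is required. How this objective is carried over to the diffusion setting here (e.g., ranking anonymized face generations by quality or consistency and replacing the log-likelihood terms with a denoising-based surrogate) is an assumption on our part, not a statement of the authors' exact formulation, which is detailed in the paper itself.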
Submission Number: 1