Reproducibility Study of "FairViT: Fair Vision Transformer via Adaptive Masking"

TMLR Paper 4283 Authors

21 Feb 2025 (modified: 02 Apr 2025) · Under review for TMLR · CC BY 4.0
Abstract: Vision Transformers (ViTs) have recently excelled in computer vision tasks but often struggle with fairness issues related to sensitive attributes such as gender and hair colour. FairViT by Tian et al. (2024) aims to address this challenge by introducing adaptive masking combined with a distance-based loss to improve fairness and accuracy while maintaining competitive computational efficiency compared to other baseline methods. In our reproducibility study, we evaluated FairViT on the CelebA dataset on attractiveness and facial expression prediction tasks, considering specific sensitive attributes, and compared it against the Vanilla and Fair Supervised Contrastive Loss (FSCL) baseline models. Contrary to the original claim regarding the effectiveness of adaptive masking, we observed that its impact on both fairness and accuracy is negligible, a finding also confirmed on the UTKFace dataset. The distance-based loss, on the other hand, demonstrated partial effectiveness, mainly when tested with different architectures. Finally, in terms of computational efficiency, FairViT required almost twice the training time per epoch of the Vanilla model and did not outperform FSCL, which had the lowest training time for the dataset size used by the authors. Overall, our findings highlight the potential effectiveness of the proposed distance loss, whereas the adaptive masking method did not deliver the expected improvements while also increasing the computational cost. Our implementation is available at: https://anonymous.4open.science/r/FairViT-reproducibility-study-54B0/.
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: Anurag Arnab
Submission Number: 4283
