Person Detection Through the Lens of Algorithmic Bias

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: object detection, autonomous vehicles, algorithmic bias, algorithmic fairness, fairness in ML
Abstract: The rise of AI-based person detection in safety-critical applications such as driverless cars and security monitoring has led to an explosion of machine learning models and dataset research. At the same time, researchers have raised questions of bias in these models and datasets. Popular person-detection benchmark datasets such as More Inclusive Annotations for People (MIAP) and Berkeley DeepDrive (BDD) suffer from both sampling and labeling biases, with serious implications for autonomous vehicles and other fields that rely on them. We conduct a comprehensive analysis of these datasets through the lens of algorithmic bias, examining both dataset and model bias. To the best of our knowledge, no prior study has examined person detection in low-quality or crowded images through this lens. The result is a novel analysis of bias in a real-world image dataset. We find that 1) image degradations common in real-world settings, such as blurriness, and 2) detectors that are skewed to rely on features such as contrast or brightness both have significant negative impacts on fairness across race, gender, and age demographics. These results can help guide future designs of robust models in object detection and beyond.
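To make the audit setup concrete, the sketch below shows one way such an analysis could be run: apply real-world-style degradations (blur, reduced contrast, low brightness) to images, run an off-the-shelf person detector, and compare per-group detection recall. This is an illustrative sketch only, not the authors' code; the detector choice, the dataset fields (`image`, ground-truth person boxes, demographic `group` label), and the IoU/score thresholds are all assumptions made for the example.

```python
# Illustrative fairness audit sketch (assumed setup, not the paper's method):
# measure how image degradations change person-detection recall per demographic group.
import torch
import torchvision
from torchvision.transforms import functional as F
from torchvision.ops import box_iou
from collections import defaultdict

# Off-the-shelf detector used purely for illustration.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
PERSON_LABEL = 1  # COCO class id for "person"

def perturb(img, kind):
    """Apply a real-world-style degradation to a float image tensor [C, H, W] in [0, 1]."""
    if kind == "blur":
        return F.gaussian_blur(img, kernel_size=9, sigma=3.0)
    if kind == "low_contrast":
        return F.adjust_contrast(img, 0.5)
    if kind == "dark":
        return F.adjust_brightness(img, 0.5)
    return img  # "clean" baseline

@torch.no_grad()
def recall_by_group(samples, kind, iou_thr=0.5, score_thr=0.5):
    """Fraction of ground-truth person boxes recovered, grouped by demographic label.

    `samples` is assumed to yield (image, gt_boxes [N, 4], group) triples.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for img, gt_boxes, group in samples:
        pred = detector([perturb(img, kind)])[0]
        keep = (pred["labels"] == PERSON_LABEL) & (pred["scores"] >= score_thr)
        det = pred["boxes"][keep]
        totals[group] += len(gt_boxes)
        if len(det) and len(gt_boxes):
            # A ground-truth box counts as detected if some prediction overlaps it enough.
            hits[group] += int((box_iou(gt_boxes, det).max(dim=1).values >= iou_thr).sum())
    return {g: hits[g] / max(totals[g], 1) for g in totals}

# Example fairness gap: compare clean vs. blurred recall across groups.
# clean = recall_by_group(samples, "clean"); blurred = recall_by_group(samples, "blur")
# gap = {g: clean[g] - blurred[g] for g in clean}
```

A per-group recall gap that widens under blur or low contrast would indicate the kind of disparate impact the abstract describes; the same loop can be repeated over other degradations or detectors.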
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 11058