Abstract: This paper presents a methodology for assessing demographic bias in AI-powered hiring systems and evaluates existing bias-mitigation techniques. We validate the proposed methodology on a dataset of anonymized CVs and job descriptions containing samples in English and Ukrainian. Following this methodology, we establish a framework for benchmarking AI-assisted hiring systems and identifying potential biases across protected groups. After detecting these biases, we test pre- and post-processing mitigation techniques to reduce bias levels. Our findings reveal that although some strategies showed positive outcomes, none completely resolved bias in AI-assisted hiring. With this research, we aim to highlight the risks of using AI in the recruitment domain and to encourage responsible AI practices in high-risk areas.
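As a minimal illustrative sketch (not the paper's actual benchmark code), the kind of per-group diagnostic the abstract describes can be pictured as follows: given screening decisions paired with a protected-group label, compute each group's selection rate and the disparate impact ratio (lowest rate divided by highest; values near 1.0 suggest parity, and 0.8 is a commonly cited "four-fifths rule" threshold). All function names, group labels, and numbers below are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch, not the paper's methodology: assumes screening
# decisions were already collected per candidate along with a
# protected-group label, and shows one common bias diagnostic.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, selected: bool) pairs."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, is_selected in decisions:
        total[group] += 1
        selected[group] += int(is_selected)
    return {g: selected[g] / total[g] for g in total}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

# Toy example with made-up numbers (not from the paper's dataset).
decisions = (
    [("group_a", True)] * 40 + [("group_a", False)] * 60
    + [("group_b", True)] * 25 + [("group_b", False)] * 75
)

rates = selection_rates(decisions)
print(rates)                          # {'group_a': 0.4, 'group_b': 0.25}
print(disparate_impact_ratio(rates))  # 0.625 -- below the common 0.8 threshold
```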
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: model bias/fairness evaluation, model bias/unfairness mitigation, ethical considerations in NLP applications
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings
Languages Studied: English, Ukrainian
Submission Number: 496