Unveiling Gender Bias: Transformer Models and Explainability Techniques in Dutch Job Ad Analysis

Published: 15 Oct 2025, Last Modified: 31 Oct 2025
Venue: BNAIC/BeNeLearn 2025 Poster
License: CC BY 4.0
Track: Type A (Regular Papers)
Keywords: Gender bias, Transformer models, SHAP, LIME, Fairness, Dutch NLP
Abstract: Explicit bias in job advertisements perpetuates systemic discrimination, challenging fairness and equity in labor markets. The contributions of this paper are: 1) an evaluation of the effectiveness of transformer-based Natural Language Processing (NLP) models in detecting explicit gender bias in Dutch job postings; and 2) an interpretability analysis of these models to uncover discriminatory terms and contextual biases. Our experiments compare monolingual models (BERTje, RobBERT) and a multilingual model (XLM-RoBERTa), as well as Shapley Additive Explanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME). Results show that RobBERT leverages Dutch linguistic nuances effectively, achieving balanced performance, while XLM-RoBERTa demonstrates a superior precision-recall balance and achieves the highest AP score. Learning curve analysis highlights the importance of larger datasets for improved model generalization, and SHAP and LIME analyses reveal the critical linguistic features driving predictions, underscoring the need for transparency in bias detection systems. By addressing gaps in explicit bias detection and advancing transparency, this research not only offers novel insights into transformer-based models and their scalability, but also supports societal efforts to foster equitable hiring practices.
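To illustrate the general idea behind the perturbation-based explanations the abstract mentions (SHAP and LIME both attribute a prediction to input tokens by observing how the output changes when tokens are perturbed), here is a minimal occlusion-style sketch. The keyword classifier and its weights are purely hypothetical stand-ins for a fine-tuned transformer; they are not from the paper.

```python
# Hypothetical keyword weights standing in for a fine-tuned transformer's
# learned evidence for "biased" (illustrative only, not from the paper).
BIAS_WEIGHTS = {"sterke": 0.4, "dominante": 0.5, "zorgzame": 0.3}

def classify(tokens):
    """Toy black-box model: P(ad is gender-biased) from keyword evidence."""
    score = sum(BIAS_WEIGHTS.get(t.lower(), 0.0) for t in tokens)
    return min(score, 1.0)

def occlusion_attribution(tokens, model):
    """Leave-one-out attribution: a token's importance is the drop in
    the model's score when that single token is masked out."""
    base = model(tokens)
    return {
        tok: base - model(tokens[:i] + tokens[i + 1:])
        for i, tok in enumerate(tokens)
    }

ad = "Wij zoeken een dominante en sterke leider".split()
attributions = occlusion_attribution(ad, classify)
top_token = max(attributions, key=attributions.get)
print(top_token, round(attributions[top_token], 2))  # prints: dominante 0.5
```

SHAP and LIME refine this scheme: LIME fits a local linear surrogate over many random token masks, and SHAP averages a token's marginal contribution over subsets, but both reduce to the same question this sketch asks, namely how much each token moves the prediction.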
Serve As Reviewer: ~Maaike_H.T._De_Boer1
Submission Number: 57