Keywords: Large Language Models, Stereotype Detection, Token-Level Explanations, Model Explainability, SHAP, LIME, Ethical AI, Responsible AI, Natural Language Processing
TL;DR: HEARTS enhances stereotype detection with explainable, low-carbon models fine-tuned on a diverse dataset, addressing LLMs' poor accuracy and the subjectivity of stereotypes.
Abstract: Stereotypes are generalised assumptions about societal groups, and even state-of-the-art LLMs using in-context learning struggle to identify them accurately. Because stereotypes are subjective, with what counts as a stereotype varying widely across cultural, social, and individual perspectives, robust explainability is crucial. Explainable models ensure that these nuanced judgments can be understood and validated by human users, promoting trust and accountability. We address these challenges by introducing HEARTS (Holistic Framework for Explainable, Sustainable, and Robust Text Stereotype Detection), a framework that enhances model performance, minimises its carbon footprint, and provides transparent, interpretable explanations. We establish the Expanded Multi-Grain Stereotype Dataset (EMGSD), comprising 57,201 labelled texts across six groups, including under-represented demographics like LGBTQ+ and regional stereotypes. Ablation studies confirm that BERT models fine-tuned on EMGSD outperform those trained on its individual components. We then analyse a fine-tuned, carbon-efficient ALBERT-V2 model using SHAP to generate token-level importance values, ensuring alignment with human understanding, and calculate explainability confidence scores by comparing SHAP and LIME outputs. An analysis of examples from the EMGSD test data indicates that when the ALBERT-V2 model predicts correctly, it assigns the highest importance to labelled stereotypical tokens. These correct predictions are also associated with higher explanation confidence scores than incorrect predictions. Finally, we apply the HEARTS framework to assess stereotypical bias in the outputs of 12 LLMs, using neutral prompts generated from the EMGSD test data to elicit 1,050 responses per model. This reveals a gradual reduction in bias over time within model families, with models from the LLaMA family appearing to exhibit the highest rates of bias.
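The abstract describes generating token-level SHAP importance values for a fine-tuned classifier and comparing SHAP and LIME outputs to derive an explanation confidence score. The minimal sketch below illustrates one way such a pipeline could be wired up; it is not the authors' implementation. The checkpoint name `albert-base-v2` is a stand-in for the EMGSD fine-tuned ALBERT-V2 model, and the Jaccard-overlap agreement at the end is a crude illustrative proxy, not the paper's confidence metric.

```python
# Illustrative sketch only: SHAP and LIME explanations for a text classifier,
# plus a rough agreement score between the two. Assumptions are noted inline.
import numpy as np
import shap
from lime.lime_text import LimeTextExplainer
from transformers import pipeline

# Hypothetical checkpoint; swap in the ALBERT-V2 model fine-tuned on EMGSD.
clf = pipeline("text-classification", model="albert-base-v2", top_k=None)

def predict_proba(texts):
    # LIME expects an (n_samples, n_classes) probability array with a fixed
    # column order, so sort the classes by label name.
    outputs = clf(list(texts))
    labels = sorted(d["label"] for d in outputs[0])
    return np.array(
        [[next(d["score"] for d in out if d["label"] == lab) for lab in labels]
         for out in outputs]
    )

text = "An example sentence to check for stereotypical language."

# SHAP: the Explainer accepts a transformers pipeline directly and returns
# per-token attributions for each output class.
shap_values = shap.Explainer(clf)([text])
token_importance = np.abs(shap_values[0].values).sum(axis=-1)
top_idx = np.argsort(token_importance)[-10:]
shap_words = {
    str(shap_values[0].data[i]).strip("▁ ").lower() for i in top_idx
} - {""}

# LIME: word-level importances from a local surrogate model.
lime_exp = LimeTextExplainer().explain_instance(
    text, predict_proba, num_features=10
)
lime_words = {w.lower() for w, _ in lime_exp.as_list()}

# Crude agreement heuristic (assumption): Jaccard overlap of the most
# influential words found by each explainer, used here only as a proxy for
# how closely the two explanations agree.
agreement = len(shap_words & lime_words) / max(len(shap_words | lime_words), 1)
print(f"SHAP-LIME agreement: {agreement:.2f}")
```

A higher overlap between the two attribution methods would, under this simplified heuristic, correspond to a more confident explanation, mirroring the paper's observation that correct predictions tend to come with higher explanation confidence scores.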
Submission Number: 70