Towards Auditing Large Language Models: Improving Text-based Stereotype Detection

Published: 23 Oct 2023, Last Modified: 28 Nov 2023 · SoLaR Poster
Keywords: Large Language Models (LLMs), Bias Auditing, Stereotype Classification, Multi-Grain Stereotype Dataset, DistilBERT, Fine-tuning, Explainable AI (XAI), Benchmarking, Societal Dimensions, GPT Series, Precision, Recall, F1 Score, Text-based Analysis, Ethical Concerns, Natural Language Processing (NLP), Real-world Consequences
TL;DR: A framework for auditing bias in LLMs using a new dataset. Multi-class training outperforms one-vs-all approaches, and explainable AI tools provide transparency. Benchmarking results show reduced bias in the GPT series over time.
Abstract: Large Language Models (LLMs) have made significant advances in the recent past, becoming more mainstream in Artificial Intelligence (AI) enabled human-facing applications. However, LLMs often generate stereotypical output, drawn from their training data, amplifying societal biases and raising ethical concerns. This work introduces i) the Multi-Grain Stereotype Dataset, which includes 52,751 instances of stereotypical text covering gender, race, profession and religion, and ii) a novel stereotype classifier for English text. We design several experiments to rigorously test the proposed model trained on the novel dataset. Our experiments show that training the model in a multi-class setting can outperform the one-vs-all binary counterpart. Consistent feature-importance signals from different eXplainable AI (XAI) tools demonstrate that the new model exploits relevant text features. We utilise the newly created model to assess the stereotypical behaviour of the popular GPT family of models and observe a reduction of bias over time. In summary, our work establishes a robust and practical framework for auditing and evaluating stereotypical bias in LLMs.
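To make the classification setup concrete, below is a minimal sketch of a multi-class stereotype detector built on DistilBERT (the backbone named in the keywords). The label set, head configuration, and example sentence are illustrative assumptions, not the paper's exact schema or a released checkpoint; the head would need fine-tuning on the Multi-Grain Stereotype Dataset before its predictions are meaningful.

# Hypothetical sketch: multi-class stereotype classification with DistilBERT.
# Label names and the example below are illustrative assumptions, not the paper's exact schema.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["unrelated", "stereotype_gender", "stereotype_race",
          "stereotype_profession", "stereotype_religion"]  # assumed label set

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=len(LABELS),
    id2label=dict(enumerate(LABELS)),
    label2id={l: i for i, l in enumerate(LABELS)},
)

def classify(text: str) -> str:
    """Return the predicted stereotype class for a single sentence."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

# After fine-tuning, such a call could be applied to LLM-generated sentences
# to audit their stereotypical content, e.g.:
print(classify("Nurses are always women."))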
Submission Number: 66