Exploring Large Language Models for Bias Mitigation and Fairness

Published: 10 Jun 2024 · Last Modified: 20 Jun 2024 · IJCAI 2024 Workshop AIGOV · CC BY 4.0
Keywords: Responsible AI, Large Language Models (LLMs), Bias and fairness in AI
TL;DR: In this paper, we present and discuss innovative approaches that incorporate LLMs for mitigating bias and ensuring fairness in AI systems, while keeping humans in the loop.
Abstract: With the increasing integration of Artificial Intelligence (AI) into various applications, concerns about fairness and bias have become paramount. While numerous strategies have been proposed to mitigate bias, there is a significant gap in the literature regarding the use of Large Language Models (LLMs) in these techniques. This paper aims to bridge this gap by presenting innovative approaches that incorporate LLMs for bias mitigation and fairness in AI systems. Our proposed methods, which build on previous research, are designed to be model- and system-agnostic while keeping humans in the loop. We envision that these approaches will foster trust between AI developers and end-users/stakeholders, contributing to the discourse on responsible AI.
Submission Number: 2