Keywords: Fairness in NLP, Graph-based representation, Social bias, Information extraction, Survey
Abstract: As large language models (LLMs) increasingly permeate diverse domains, concerns about their trustworthiness and fairness have become central ethical issues. Since social bias is highly context-dependent, understanding relational structure beyond isolated text is crucial. Graph-based representations capture structural relationships within and across texts, and their recent integration with LLMs has shown promising results for addressing fairness. However, there is still no comprehensive review of how graphs and LLMs are jointly used for fairness. To address this gap, we provide a systematic analysis of graph-based approaches to fairness and their integration with LLMs, and outline key directions for future research.
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: Ethics, Bias, and Fairness, Information Extraction, NLP for Social Good, Interpretability and Analysis of Models for NLP, Machine Learning for NLP
Contribution Types: Surveys
Languages Studied: English
Submission Number: 6070