What is Behind Homelessness Bias? Using LLMs and NLP to Mitigate Homelessness by Acting on Social Stigma

Published: 01 Jan 2025, Last Modified: 07 Oct 2025 · IJCAI 2025 · CC BY-SA 4.0
Abstract: Bias towards people experiencing homelessness (PEH) is prevalent in online spaces. This project will leverage natural language processing (NLP) and large language models (LLMs) to identify, classify, and measure this bias using geolocated data collected from X (formerly Twitter), Reddit, meeting minutes, and news media across the United States. While public opinion often cites addiction, criminality, and high welfare spending to justify bias against PEH, we will conduct a comparative study to determine whether racial fractionalization is associated with homelessness bias. The study aims to open a new path to alleviating homelessness by unveiling the intersectional bias that affects PEH and racial minority groups. Over the course of the project, we will deliver a lexicon, compile an annotated database of homelessness and homelessness-racism intersectional (HRI) bias, evaluate LLMs as classifiers of homelessness and HRI bias, develop homelessness and HRI bias metrics, and audit existing LLMs for HRI bias. In collaboration with non-profits and the city council of South Bend, Indiana, USA, our ultimate goal is to contribute to homelessness alleviation by counteracting social stigma and restoring the dignity and well-being of those affected.
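The abstract does not specify how racial fractionalization is measured; a common operationalization (an assumption here, not the authors' stated choice) is the Herfindahl-based index of Alesina et al. (2003), the probability that two randomly drawn residents belong to different racial groups. A minimal sketch:

```python
def fractionalization(shares):
    """Herfindahl-based fractionalization index F = 1 - sum(s_i^2),
    where s_i are racial group population shares summing to 1.
    Higher F means a more racially fractionalized area."""
    assert abs(sum(shares) - 1.0) < 1e-6, "group shares must sum to 1"
    return 1.0 - sum(s * s for s in shares)

# Example: a city whose population is 60% / 25% / 15% across three groups.
print(fractionalization([0.60, 0.25, 0.15]))  # ≈ 0.555
```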
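For "evaluate LLMs as classifiers of homelessness and HRI bias", one plausible baseline (our illustration, not the project's stated method or label scheme) is zero-shot labeling of collected posts, sketched here with Hugging Face's `zero-shot-classification` pipeline; the model and candidate labels are assumptions:

```python
from transformers import pipeline

# Zero-shot baseline: score each post against candidate bias labels.
# The model and label set below are illustrative, not the project's
# annotation scheme.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

labels = ["biased against people experiencing homelessness",
          "homelessness-racism intersectional bias",
          "neutral or supportive"]

post = "They should just clear the camps; those people choose to live like that."
result = classifier(post, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{score:.2f}  {label}")
```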