Abstract: Bias towards people experiencing homelessness (PEH) is prevalent in online spaces. I leverage natural language processing (NLP) and large language models (LLMs) to identify, classify, and measure this bias using geolocated data collected from X (formerly Twitter), Reddit, meeting minutes, and news articles across the United States. The results of the study aim to open a new path toward alleviating homelessness by revealing the intersectional bias that affects PEH. My research delivers a lexicon on homelessness, compiles an annotated dataset of homelessness bias and homelessness-racism intersectional (HRI) bias, evaluates LLMs as classifiers of these biases, and audits existing LLMs for HRI bias. My goal is to contribute to homelessness alleviation by counteracting social stigma and restoring the human dignity of those affected.