Abstract: The use of generative large language models (LLMs) has grown rapidly in recent years. Thanks to improvements in training regimes, these models produce fluent text and interact with humans in unprecedented ways. Consequently, researchers have begun investigating the "cognitive" abilities and biases of LLMs. One cognitive bias that is particularly interesting for interaction is homophily. In this work, we analyze two popular models (Llama 3.2 3B and GPT-4o Mini) to assess the degree of homophily across nine different human attributes, accounting for two other cognitive biases, namely framing and order bias. Our findings suggest that, while Llama 3.2 3B exhibits traces of framing and order bias, GPT-4o Mini exhibits homophilic bias, particularly with respect to political view and personality type. This has significant implications for echo chambers, disinformation dissemination, and social polarization in AI systems that utilize LLMs. Our results highlight the need for rigorous investigations into homophily to ensure responsible AI deployment.
Paper Type: Short
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: Cognitive Bias, Homophily, LLM, Framing Bias, Order Bias
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 7409