Abstract: We propose and explore the possibility that language models can be studied as effective proxies for
specific human sub-populations in social science research. Practical and research applications of artificial
intelligence tools have sometimes been limited by problematic biases (such as racism or sexism), which
are often treated as uniform properties of the models. We show that the “algorithmic bias” within one
such tool, the GPT-3 language model, is instead both fine-grained and demographically correlated,
meaning that proper conditioning will cause it to accurately emulate response distributions from a wide
variety of human subgroups. We term this property algorithmic fidelity and explore its extent in GPT-3.
We create “silicon samples” by conditioning the model on thousands of socio-demographic backstories
from real human participants in multiple large surveys conducted in the United States. We then compare
the silicon and human samples to demonstrate that the information contained in GPT-3 goes far beyond
surface similarity. It is nuanced, multifaceted, and reflects the complex interplay between ideas, attitudes,
and socio-cultural context that characterize human attitudes. We suggest that language models with
sufficient algorithmic fidelity thus constitute a novel and powerful tool to advance understanding of
humans and society across a variety of disciplines.
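As an illustration of the conditioning approach described in the abstract, the sketch below shows how a respondent's socio-demographic attributes might be assembled into a first-person backstory prompt and passed to a completions-style language model. The `build_backstory` and `query_model` helpers, the field names, and the prompt template are hypothetical assumptions used only to make the "silicon sample" idea concrete; they are not the paper's actual prompts or survey items.

```python
# Hypothetical sketch: conditioning a GPT-3-style model on a survey
# respondent's socio-demographic backstory ("silicon sampling").
# Field names, template wording, and query_model are illustrative
# assumptions, not the paper's exact prompts or API calls.

from typing import Dict


def build_backstory(respondent: Dict[str, str]) -> str:
    """Turn one respondent's demographics into a first-person backstory."""
    return (
        f"Racially, I am {respondent['race']}. I am {respondent['gender']}. "
        f"I am {respondent['age']} years old. "
        f"Ideologically, I am {respondent['ideology']}. "
        f"In the last election, I voted for {respondent['vote_2016']}."
    )


def build_prompt(respondent: Dict[str, str], question: str) -> str:
    """Append the survey item so the model completes the respondent's answer."""
    return f"{build_backstory(respondent)}\n{question} I would say"


def query_model(prompt: str) -> str:
    """Placeholder for a call to a completions-style language model API.

    In practice this would send the prompt to the provider's completion
    endpoint with a small max-token limit and record the returned text
    (or token probabilities) as one "silicon" response.
    """
    raise NotImplementedError("Plug in a language-model client here.")


if __name__ == "__main__":
    # One illustrative respondent profile (values are made up).
    respondent = {
        "race": "white",
        "gender": "a woman",
        "age": "54",
        "ideology": "conservative",
        "vote_2016": "Donald Trump",
    }
    prompt = build_prompt(respondent, "When asked which party I identify with,")
    # response = query_model(prompt)  # would yield one simulated answer
    print(prompt)
```

Repeating this over thousands of real respondent profiles, and comparing the distribution of model completions to the corresponding human survey responses, is the kind of procedure the abstract summarizes.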