Abstract: Social media platforms often assume that users can self-correct against misinformation. However,
social media users are not equally susceptible to all misinformation: their biases influence which
types of misinformation might thrive and who might be at risk. We use the term “diverse misinformation”
to describe the complex relationships between human biases and the demographics represented in misinformation. To
investigate how users’ biases impact their susceptibility and their ability to correct each other, we
analyze the classification of deepfakes as a type of diverse misinformation. We chose deepfakes as a case
study for three reasons: (1) their classification as misinformation is more objective; (2) we can control
the demographics of the personas presented; (3) deepfakes are a real-world concern with associated
harms that must be better understood. Our paper presents an observational survey (N = 2016) in which
participants are exposed to videos and asked questions about the attributes of the people they see,
without knowing that some of the videos might be deepfakes. Our analysis investigates the extent to which different users are duped and which
perceived demographics of deepfake personas tend to mislead. We find that accuracy varies by
demographics, and that participants are generally better at classifying videos whose personas match their own demographics. We
extrapolate from these results to understand the potential population-level impacts of these biases
using a mathematical model of the interplay between diverse misinformation and crowd correction.
Our model suggests that diverse contacts might provide “herd correction,” whereby friends can protect
each other. Altogether, human biases and the attributes of misinformation matter greatly, but having a
diverse social group may help reduce susceptibility to misinformation.
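The “herd correction” idea lends itself to a toy illustration. The sketch below is not the authors’ model; it is a minimal, hypothetical simulation (the group labels, detection probabilities, graph parameters, and the use of networkx are all our assumptions) in which each agent detects a deepfake with higher probability when the persona matches their own demographic group, and an agent counts as protected if it, or at least one of its neighbors, detects the fake.

```python
import random

import networkx as nx  # assumed dependency; any graph structure would do

GROUPS = ["A", "B"]           # hypothetical demographic groups
P_DETECT_MATCH = 0.6          # illustrative accuracy when viewer matches the persona
P_DETECT_MISMATCH = 0.4       # illustrative accuracy otherwise

def detects(viewer_group: str, persona_group: str) -> bool:
    """Toy version of the survey finding: a viewer spots the deepfake with
    higher probability when the persona's demographics match their own."""
    p = P_DETECT_MATCH if viewer_group == persona_group else P_DETECT_MISMATCH
    return random.random() < p

def fraction_protected(graph: nx.Graph, groups: dict, persona_group: str) -> float:
    """An agent counts as 'protected' if it detects the deepfake itself or has
    at least one neighbor who detects it and can issue a correction."""
    detected = {v: detects(groups[v], persona_group) for v in graph}
    protected = sum(
        detected[v] or any(detected[u] for u in graph.neighbors(v))
        for v in graph
    )
    return protected / graph.number_of_nodes()

random.seed(1)
G = nx.erdos_renyi_graph(n=1_000, p=0.01)

homogeneous = {v: "A" for v in G}                # every contact shares one group
diverse = {v: random.choice(GROUPS) for v in G}  # demographically mixed contacts

# A deepfake persona from group "B" circulating through the network:
print("homogeneous contacts:", fraction_protected(G, homogeneous, "B"))
print("diverse contacts:    ", fraction_protected(G, diverse, "B"))
```

Under these assumed parameters, the demographically mixed population leaves a larger fraction of agents protected than the homogeneous one, since mismatched viewers can rely on matched neighbors to catch what they miss, which is the intuition behind herd correction.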