Abstract: Social media platforms have been the subject of controversy
and scrutiny due to the spread of hateful content. To address
this problem, the platforms implement content moderation using a mix of human and algorithmic processes. However, content moderation itself has led to further accusations against
the platforms of political bias. In this study, we investigate
how channel partisanship and video misinformation affect
the likelihood of comment moderation on YouTube. Using
a dataset of 84,068 comments on 258 videos, we find that
although comments on right-leaning videos are more heavily moderated from a correlational perspective, there is no
evidence to support claims of political bias once we use a
causal model that controls for common confounders (e.g., hate
speech). Additionally, we find that comments are more likely
to be moderated if the video channel is ideologically extreme,
if the video content is false, and if the comments were posted
after a fact-check.