Abstract: Two commonly employed strategies to combat the rise of misinformation on social media are (i) fact-checking by professional organisations and (ii) community moderation by platform users. Policy changes by Twitter/X and, more recently, Meta signal a shift away from partnerships with fact-checking organisations and towards an increased reliance on crowdsourced community notes. However, the extent and nature of the dependencies between fact-checking and \emph{helpful} community notes remain unclear. To address this gap, we use language models to annotate a large corpus of Twitter/X community notes with attributes such as topic, cited sources, and whether they refute claims tied to broader misinformation narratives. Our analysis reveals that community notes cite fact-checking sources up to five times more often than previously reported. Fact-checking is especially crucial for notes on posts linked to broader narratives, which are \emph{twice} as likely to reference fact-checking sources as other sources. In conclusion, our results show that successful community moderation relies heavily on professional fact-checking.
Paper Type: Short
Research Area: Computational Social Science and Cultural Analytics
Research Area Keywords: fact checking, quantitative analyses of news and/or social media, misinformation detection and analysis
Contribution Types: NLP engineering experiment, Data resources, Data analysis, Position papers
Languages Studied: English
Submission Number: 3456