CrediBench: Building Web-Scale Network Datasets for Information Integrity

Published: 23 Sept 2025, Last Modified: 22 Oct 2025
Venue: NPGML Poster
License: CC BY 4.0
Keywords: misinformation detection, graph mining, graph learning, temporal graphs
TL;DR: We propose a data-curation pipeline for constructing large-scale temporal web graphs that combine textual content and hyperlink structure for information-veracity research.
Abstract: Online misinformation poses an escalating threat, amplified by the Internet’s open nature and increasingly capable LLMs that generate persuasive yet deceptive content. Existing misinformation detection methods typically focus on either textual content or network structure in isolation, failing to leverage the rich, dynamic interplay between website content and hyperlink relationships that characterizes real-world misinformation ecosystems. We introduce CrediBench: a large-scale data processing pipeline for constructing temporal web graphs that jointly model textual content and hyperlink structure for misinformation detection. Unlike prior work, our approach captures the dynamic evolution of general misinformation domains, including changes in both content and inter-site references over time. Our processed one-month snapshot extracted from the Common Crawl archive in December 2024 contains 45 million nodes and 1 billion edges, representing the largest web graph dataset made publicly available for misinformation research to date. From our experiments on this graph snapshot, we demonstrate the strength of both structural and webpage content signals for learning credibility scores, which measure source reliability. The code and dataset are available with the submission.
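The abstract describes jointly modeling website content and hyperlink relationships by building a web graph from a crawl snapshot. As a minimal illustrative sketch (not the authors' actual pipeline), one could aggregate page-level hyperlinks from parsed crawl records into a weighted domain-level graph; the `pages` input format here is a hypothetical stand-in for records extracted from an archive such as Common Crawl:

```python
from collections import defaultdict
from urllib.parse import urlparse

def build_domain_graph(pages):
    """Aggregate page-level outlinks into a weighted domain-level edge list.

    `pages` is an iterable of (page_url, [outlink_url, ...]) pairs — a
    hypothetical stand-in for records parsed from a crawl snapshot.
    Returns the set of domain nodes and a dict mapping
    (source_domain, target_domain) -> hyperlink count.
    """
    edges = defaultdict(int)
    nodes = set()
    for page_url, outlinks in pages:
        src = urlparse(page_url).netloc
        if not src:
            continue
        nodes.add(src)
        for link in outlinks:
            dst = urlparse(link).netloc
            if dst and dst != src:  # skip intra-domain self-links
                nodes.add(dst)
                edges[(src, dst)] += 1
    return nodes, dict(edges)

# Toy example: two pages on one site linking out to two other domains.
pages = [
    ("https://newsA.example/article1",
     ["https://newsB.example/story", "https://newsC.example/post"]),
    ("https://newsA.example/article2",
     ["https://newsB.example/story2"]),
]
nodes, edges = build_domain_graph(pages)
```

A temporal graph in the spirit of the paper would then be a sequence of such snapshots, one per crawl period, with node features derived from each domain's textual content.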
Submission Number: 18