On Fake News Detection with LLM Enhanced Semantics Mining

Anonymous

16 Feb 2024, ACL ARR 2024 February Blind Submission
Abstract: Large language models (LLMs) have emerged as valuable tools for enhancing textual features in various text-related tasks. In this paper, we assess the effectiveness of news embeddings from ChatGPT for detecting fake news and show that, although their performance slightly surpasses that of the pre-trained BERT model, they still lag behind the state of the art. We attribute this shortfall to the reliance on tokenized input text, which misses the complex narratives and subtleties that are crucial for identifying fake news. To capture these nuances, we probe the high-level semantic relations among news pieces, real entities, and topics, which we model as a heterogeneous graph whose nodes denote the different items and whose edges represent their relations. We then propose a Generalized PageRank model and a consistent learning criterion for mining the local and global semantics centered on each news piece through adaptive propagation of features across the graph. Our model achieves new state-of-the-art performance on five benchmark datasets, and the effectiveness of its key ingredients is supported by extensive analysis. Our code is available at \url{https://github.com/LEG4FD/LEG4FD}.
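The abstract describes the propagation step only at a high level. Below is a minimal, hypothetical sketch of Generalized PageRank style propagation over a node-feature matrix, assuming the standard formulation Z = sum_k gamma_k * A_hat^k * H with a symmetrically normalized adjacency matrix A_hat and per-hop weights gamma_k; the toy graph, feature dimensions, and hop weights are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of Generalized PageRank (GPR) propagation (hypothetical example,
# not the authors' implementation). Z = sum_k gamma_k * A_hat^k * H.
import numpy as np

def normalized_adjacency(adj: np.ndarray) -> np.ndarray:
    """Return D^{-1/2} (A + I) D^{-1/2} with self-loops added."""
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    return a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gpr_propagate(adj: np.ndarray, feats: np.ndarray, gammas: np.ndarray) -> np.ndarray:
    """Aggregate K-hop neighborhoods with per-hop weights gamma_0..gamma_K."""
    a_hat = normalized_adjacency(adj)
    h = feats
    out = gammas[0] * h
    for gamma_k in gammas[1:]:
        h = a_hat @ h          # propagate features one more hop
        out += gamma_k * h     # weighted contribution of this hop
    return out

# Toy heterogeneous graph: 4 nodes (e.g., 2 news pieces, 1 entity, 1 topic).
adj = np.array([[0, 0, 1, 1],
                [0, 0, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]], dtype=float)
feats = np.random.default_rng(0).normal(size=(4, 3))
gammas = np.array([0.5, 0.3, 0.2])   # hypothetical hop weights; learned in practice
print(gpr_propagate(adj, feats, gammas).shape)  # (4, 3)
```

In practice the hop weights gamma_k would be learned jointly with the rest of the model, letting the propagation adaptively balance local (few-hop) and global (many-hop) semantics around each news piece.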
Paper Type: long
Research Area: NLP Applications
Contribution Types: NLP engineering experiment, Data analysis
Languages Studied: English