TELLER: A Trustworthy Framework for Explainable, Generalizable and Controllable Fake News Detection

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: The proliferation of fake news has emerged as a severe societal problem, attracting significant interest from industry and academia. While existing deep-learning-based methods have made progress in detecting fake news accurately, their reliability may be compromised by non-transparent reasoning processes, poor generalization abilities, and the inherent risks of integration with large language models (LLMs). To address this challenge, we propose TELLER, a novel framework for trustworthy fake news detection that prioritizes the explainability, generalizability, and controllability of models. This is achieved via a dual-system framework that integrates a cognition system and a decision system, adhering to the principles above. The cognition system harnesses human expertise to generate logical predicates, which guide LLMs in producing human-readable logic atoms. Meanwhile, the decision system deduces generalizable logic rules to aggregate these atoms, enabling identification of the truthfulness of input news across diverse domains and enhancing the transparency of the decision-making process. Finally, we present comprehensive evaluation results on four datasets, demonstrating the feasibility and trustworthiness of our proposed framework.
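To make the decision system's rule aggregation concrete, here is a minimal sketch (our assumption for illustration, not the paper's implementation) of how truth values of LLM-produced logic atoms could be combined by learned rules in disjunctive normal form (DNF): a news item is flagged as fake if any conjunctive clause is fully satisfied. The atom names and the rule shown are hypothetical.

```python
def evaluate_dnf(atoms, rules):
    """Aggregate logic atoms with DNF rules.

    atoms: dict mapping atom name -> bool (e.g., answers produced by an LLM).
    rules: list of conjunctive clauses; each clause is a list of
           (atom_name, expected_value) pairs. The prediction is the
           disjunction over clauses of the conjunction within each clause.
    """
    for clause in rules:
        # A clause fires only if every literal in it matches the atom values.
        if all(atoms.get(name, False) == expected for name, expected in clause):
            return True  # some rule fired -> predict "fake"
    return False

# Hypothetical atoms extracted for one news item
atoms = {
    "source_is_unverified": True,
    "claim_contradicts_evidence": True,
    "contains_emotional_language": False,
}
# One hypothetical learned rule: unverified source AND contradicted claim -> fake
rules = [[("source_is_unverified", True), ("claim_contradicts_evidence", True)]]
print(evaluate_dnf(atoms, rules))  # True
```

In the actual framework the DNF layer is trainable, so rule weights are learned from data rather than enumerated as Boolean clauses; this sketch only conveys the readable, rule-based structure of the final decision.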
Paper Type: long
Research Area: NLP Applications
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English
Preprint Status: There is a non-anonymous preprint (URL specified in the next question).
A1: yes
A1 Elaboration For Yes Or No: In Sec. Limitations.
A2: yes
A2 Elaboration For Yes Or No: In Sec. Ethics Statement.
A3: yes
A3 Elaboration For Yes Or No: In both abstract and introduction.
B: yes
B1: yes
B1 Elaboration For Yes Or No: In Sec. Appendix.
B2: no
B2 Elaboration For Yes Or No: We will discuss the licenses or terms of the artifacts we utilized in our work on the project page when releasing our code.
B3: yes
B3 Elaboration For Yes Or No: In Sec. Ethics Statement.
B4: n/a
B4 Elaboration For Yes Or No: The datasets we utilized are publicly available, do not include sensitive private information, and do not pose any harm to society.
B5: n/a
B6: yes
B6 Elaboration For Yes Or No: In Sec. B.1.
C: yes
C1: yes
C1 Elaboration For Yes Or No: In Sec. 4. Our experiments are conducted on four Nvidia 490Ti GPUs with PyTorch 2.0. The decision system can be trained in several minutes. The number of parameters of our system is the sum of the parameters of the LLM utilized in the cognition system and the DNF layer employed in the decision system. Since the number of DNF layer parameters is negligible, we report only the number of LLM parameters.
C2: yes
C2 Elaboration For Yes Or No: In Sec. B.
C3: no
C3 Elaboration For Yes Or No: To avoid randomness, all experiments are repeated three times with different random seeds, and the average results are reported.
C4: yes
C4 Elaboration For Yes Or No: In Sec. B and Sec. D. We will release the details of the Python environment and the code for reproduction, and have provided the related files with this submission.
D: no
E: yes
E1: n/a
E1 Elaboration For Yes Or No: AI assistance was used only to polish the writing of our paper.