Abstract: Evidence-aware fake news detection aims to automatically capture claim-evidence interactions. In particular, such methods aspire to mimic how human fact-checkers verify the veracity of a claim against pieces of information, facts, or data serving as evidence that can either support or refute the claim. However, current evidence-aware approaches, which are built mainly on LSTMs and Graph Neural Networks (GNNs), suffer from the lack of high-quality data in low-resource fake news detection scenarios. In this study, we further investigate the capability of BERT as a backbone architecture for the evidence-aware fake news detection task. Our experiments show that although standard LSTM-based and GNN-based evidence-aware methods excel on English datasets, their performance deteriorates on an Indonesian dataset characterized by extremely imbalanced class representation and scarce evidence representation. In contrast, BERT-based evidence-aware models are able to mitigate this data bottleneck, suggesting the importance of pretrained representations and evidence features for fake news detection in low-resource settings.