Detect Low-Resource Rumors in Microblog Posts via Adversarial Contrastive Learning

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission · Readers: Everyone
Abstract: Massive false rumors emerging alongside breaking news or trending topics severely hinder the spread of truth. Existing rumor detection approaches achieve promising performance on yesterday's news, since ample corpora collected from the same domain are available for model training. However, they perform poorly at detecting rumors about unforeseen events such as COVID-19 due to the lack of training data and prior knowledge (i.e., low-resource rumors). In this paper, we propose an adversarial contrastive learning framework to detect low-resource rumors by adapting features learned from well-resourced rumor data to the low-resource domain. Our model explicitly overcomes the restrictions of both domain and language usage via language alignment and contrastive training. Moreover, we develop an adversarial augmentation mechanism to further enhance the robustness of low-resource rumor representations. Extensive experiments conducted on two low-resource datasets collected from real-world microblog platforms demonstrate that our framework achieves much better performance than state-of-the-art methods and exhibits a superior capacity for detecting rumors at early stages.
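The abstract does not specify the exact losses, but the two core ingredients it names (contrastive training across well-resourced and low-resource data, plus adversarial augmentation) can be illustrated with a minimal PyTorch sketch. This is not the authors' implementation: the supervised contrastive loss and the FGSM-style perturbation are standard stand-ins, and `encoder`, `head`, `tau`, and `epsilon` are illustrative names and hyperparameters assumed here.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, tau=0.1):
    """Pull together representations with the same rumor label
    (e.g., across source and target languages), push apart the rest.
    z: (N, d) pooled post representations; labels: (N,) class ids."""
    z = F.normalize(z, dim=-1)
    sim = (z @ z.t()) / tau                              # pairwise cosine similarity
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    logits = sim.masked_fill(self_mask, -1e9)            # drop self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    n_pos = pos_mask.sum(1).clamp(min=1)
    return -(log_prob * pos_mask).sum(1).div(n_pos).mean()

def fgsm_augment(encoder, head, emb, labels, epsilon=1e-2):
    """Adversarially perturb input embeddings (FGSM-style) to create
    harder views of scarce target-domain examples for augmentation.
    encoder/head are assumed callables: emb -> pooled repr -> class logits."""
    emb = emb.detach().requires_grad_(True)
    loss = F.cross_entropy(head(encoder(emb)), labels)
    (grad,) = torch.autograd.grad(loss, emb)
    return (emb + epsilon * grad.sign()).detach()
```

In a training loop under these assumptions, clean and adversarially perturbed embeddings from both domains would be encoded and fed to the contrastive loss together with a standard classification loss, so that low-resource examples are aligned with their well-resourced counterparts of the same class.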