Abstract: The growth of unverified multimodal content on microblogging sites has emerged as a challenging problem in recent years. A major roadblock is the lack of automated tools for rumour detection, and previous work in this field has mainly addressed textual content alone. Recent studies show that incorporating multiple modalities (text and image) benefits many tasks by enriching the understanding of context. This paper introduces a novel multimodal architecture for rumour detection. It consists of two attention-based BiLSTM neural networks that generate text and image feature representations, which are fused by a cross-modal fusion block and then passed to a rumour detection module. To establish the effectiveness of the proposed approach, we extend the existing PHEME-2016 dataset by collecting the available images and, where none are available, additionally downloading new images from the Web. Experiments show that our proposed architecture outperforms state-of-the-art methods by a large margin.
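The pipeline described above (two attention-based BiLSTM encoders, a cross-modal fusion block, and a rumour detection head) could be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all layer sizes, the additive-attention pooling, and the gated fusion mechanism are assumptions made for the example.

```python
# Hypothetical sketch of the described multimodal architecture.
# Dimensions, attention style, and fusion mechanism are illustrative
# assumptions; the paper's actual design may differ.
import torch
import torch.nn as nn

class AttnBiLSTM(nn.Module):
    """BiLSTM encoder with attention pooling over time steps."""
    def __init__(self, in_dim, hidden):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, bidirectional=True,
                            batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)

    def forward(self, x):                       # x: (batch, seq, in_dim)
        h, _ = self.lstm(x)                     # (batch, seq, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention over steps
        return (w * h).sum(dim=1)               # pooled: (batch, 2*hidden)

class MultimodalRumourDetector(nn.Module):
    def __init__(self, text_dim=300, img_dim=512, hidden=128):
        super().__init__()
        self.text_enc = AttnBiLSTM(text_dim, hidden)
        self.img_enc = AttnBiLSTM(img_dim, hidden)
        # Cross-modal fusion: a gated combination of the modality vectors.
        self.gate = nn.Linear(4 * hidden, 2 * hidden)
        self.classifier = nn.Linear(2 * hidden, 2)  # rumour / non-rumour

    def forward(self, text_seq, img_seq):
        t = self.text_enc(text_seq)
        v = self.img_enc(img_seq)
        g = torch.sigmoid(self.gate(torch.cat([t, v], dim=-1)))
        fused = g * t + (1 - g) * v             # element-wise gated fusion
        return self.classifier(fused)           # logits: (batch, 2)

model = MultimodalRumourDetector()
text = torch.randn(4, 20, 300)  # e.g. word embeddings of a tweet
imgs = torch.randn(4, 49, 512)  # e.g. CNN region features of the image
logits = model(text, imgs)
print(logits.shape)             # torch.Size([4, 2])
```

The gated fusion lets the model weight each modality per example, which matches the intuition that some rumours are signalled mainly by the text and others by the attached image.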