Emotion aided multi-task framework for video embedded misinformation detection

Published: 01 Jan 2024, Last Modified: 11 Jun 2024, Multim. Tools Appl. 2024, CC BY-SA 4.0
Abstract: Online news consumption via social media platforms has accelerated the growth of digital journalism. Unlike traditional media, digital media has lower entry barriers and allows anyone to act as a content creator, resulting in the production of large volumes of fake news designed to attract public attention. Because multimedia content conveys feelings more readily than text alone, image- and video-embedded fake news now circulates rapidly on social media. Emotional appeal in fake news is also a driving factor in its rapid dissemination. Although prior studies have made remarkable efforts toward fake news detection, they place less emphasis on the video modality and on emotional appeal in fake news. To bridge this gap, this paper presents the following two contributions: i) it develops a video-based multimodal fake news detection dataset named FakeClips, and ii) it introduces a deep multitask framework dedicated to video-embedded multimodal fake news detection, in which fake news detection is the main task and emotion recognition is the auxiliary task. The results reveal that investigating emotion and fake news together in a multitasking framework achieves gains of 9.04% in accuracy and 5.27% in F-score over the state-of-the-art model, i.e., the Fake Video Detection Model.
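To make the multitask setup concrete, the following is a minimal sketch of one common way to pair a main task with an auxiliary task: a shared encoder over fused multimodal features feeding two classification heads, trained with a weighted joint loss. The class names, feature dimensions, and the auxiliary loss weight here are illustrative assumptions, not the architecture or hyperparameters reported in the paper.

```python
import torch
import torch.nn as nn

class MultiTaskFakeNewsModel(nn.Module):
    """Hypothetical sketch: a shared encoder feeding two heads, one for the
    main fake-news task and one for the auxiliary emotion-recognition task."""

    def __init__(self, feat_dim=768, hidden_dim=256, num_emotions=6):
        super().__init__()
        # Shared representation over fused multimodal features
        # (e.g., pre-extracted text/video embeddings of size feat_dim).
        self.shared = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.3),
        )
        self.fake_head = nn.Linear(hidden_dim, 2)                 # real vs. fake (main task)
        self.emotion_head = nn.Linear(hidden_dim, num_emotions)   # emotion classes (auxiliary task)

    def forward(self, x):
        h = self.shared(x)
        return self.fake_head(h), self.emotion_head(h)

def joint_loss(fake_logits, emo_logits, fake_labels, emo_labels, aux_weight=0.5):
    """Weighted sum of main and auxiliary cross-entropy losses; aux_weight is
    a tunable hyperparameter, not a value taken from the paper."""
    ce = nn.CrossEntropyLoss()
    return ce(fake_logits, fake_labels) + aux_weight * ce(emo_logits, emo_labels)

# Usage example with random tensors standing in for fused multimodal inputs.
model = MultiTaskFakeNewsModel()
x = torch.randn(8, 768)
fake_logits, emo_logits = model(x)
loss = joint_loss(fake_logits, emo_logits,
                  torch.randint(0, 2, (8,)), torch.randint(0, 6, (8,)))
loss.backward()
```

The intuition behind this kind of design is that the emotion head forces the shared encoder to retain affective cues, which the abstract identifies as a driving factor in fake news dissemination, so the main fake-news head can exploit them.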