Abstract: Multimodal news event detection aims to identify and categorize significant events across media platforms using multimodal data. Previous work has been limited to a single platform and assumes complete multimodal data. In this paper, we explore a novel task of cross-platform multimodal news event detection to enhance model generalization in cross-platform scenarios. We propose a Self-Supervised Modality Complementation (SSMC) method to tackle the two challenges posed by this task: incomplete modalities and platform heterogeneity. Specifically, a Missing Data Complementation (MDC) module is designed to overcome the limitations caused by incomplete modalities. It employs a separation mechanism that distinguishes modality-specific from modality-shared features across all modalities, allowing missing modalities to be complemented with information extracted from the shared features. Meanwhile, a Multimodal Self-Learning (MSL) module addresses platform heterogeneity by extracting pseudo labels from the target platform’s multimodal views and incorporating a self-penalization mechanism to reduce reliance on low-confidence labels. Additionally, we collect a comprehensive cross-platform news event detection (CNED) dataset encompassing 37,711 multimodal samples from Twitter, Flickr, and online news media, covering 40 public news events verified against Wikipedia. Extensive experiments on the CNED dataset demonstrate the superior performance of the proposed method.
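The abstract describes the two modules in enough detail to sketch their flavor. Below is a minimal, hypothetical PyTorch sketch of (a) splitting each modality into shared and specific features so that a missing modality can be approximated from shared information, in the spirit of MDC, and (b) a confidence-weighted pseudo-label loss in the spirit of MSL's self-penalization. All module names, dimensions, and the confidence threshold are illustrative assumptions, not the paper's actual SSMC architecture.

```python
# Illustrative sketch only: shared/specific separation for missing-modality
# complementation, plus a confidence-weighted pseudo-label loss. Names and
# dimensions are assumptions, not the paper's actual design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Maps one modality into shared and specific feature spaces."""
    def __init__(self, in_dim: int, feat_dim: int):
        super().__init__()
        self.shared_head = nn.Linear(in_dim, feat_dim)    # modality-shared
        self.specific_head = nn.Linear(in_dim, feat_dim)  # modality-specific

    def forward(self, x):
        return self.shared_head(x), self.specific_head(x)

class MissingDataComplementation(nn.Module):
    """Approximates a missing modality's features from shared information."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.text_enc = ModalityEncoder(768, feat_dim)    # e.g. text features
        self.image_enc = ModalityEncoder(2048, feat_dim)  # e.g. image features
        self.completer = nn.Linear(feat_dim, feat_dim)    # shared -> missing

    def forward(self, text, image=None):
        t_shared, t_spec = self.text_enc(text)
        if image is not None:
            i_shared, i_spec = self.image_enc(image)
        else:
            # Complement the missing image modality from the text's
            # shared features; the specific part cannot be recovered.
            i_shared = self.completer(t_shared)
            i_spec = torch.zeros_like(i_shared)
        return torch.cat([t_shared, t_spec, i_shared, i_spec], dim=-1)

def self_penalized_loss(logits, pseudo_labels, tau: float = 0.9):
    """Cross-entropy on pseudo labels, down-weighting low-confidence ones.

    A simple thresholding stand-in for the self-penalization idea: samples
    whose max predicted probability falls below `tau` contribute nothing.
    """
    conf, _ = logits.softmax(dim=-1).max(dim=-1)
    weights = (conf >= tau).float()
    ce = F.cross_entropy(logits, pseudo_labels, reduction="none")
    return (weights * ce).sum() / weights.sum().clamp(min=1.0)

model = MissingDataComplementation(feat_dim=256)
fused = model(torch.randn(4, 768))          # a batch with images missing
print(fused.shape)                          # torch.Size([4, 1024])

logits = torch.randn(4, 40)                 # 40 event classes, as in CNED
pseudo = logits.argmax(dim=-1)              # pseudo labels from predictions
print(self_penalized_loss(logits, pseudo))  # scalar loss
```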