Track: Social networks and social media
Keywords: Damage analysis, Social network analysis, Feature fusion, Multimodal deep learning, Multi-task learning
TL;DR: This work presents the Bidirectional Multi-task Cascaded Multimodal Fusion (BiMCF) method for multimodal damage analysis on social media, which improves the interactions among its different subtasks.
Abstract: Damage analysis on social media platforms such as Twitter is a comprehensive problem that involves different subtasks for mining damage-related information from tweets (e.g., informativeness, humanitarian categories, and severity assessment). The comprehensive information obtained by damage analysis makes it possible to identify breaking events around the world in real time and thereby aids emergency response. Recently, with the rapid development of web technologies, multimodal damage analysis has received increasing attention due to users' preference for posting multimodal content on social media. Multimodal damage analysis leverages the associated image modality to improve the identification of damage-related information in social media. However, existing works on multimodal damage analysis address each damage-related subtask individually and do not consider a joint training mechanism. In this work, we propose the Bidirectional Multi-task Cascaded Multimodal Fusion (BiMCF) approach for joint multimodal damage analysis. To this end, we introduce a cascaded multimodal fusion framework that separately integrates effective visual and textual information for each task, considering that different tasks attend to different information. To exploit the interactions across tasks, bidirectional propagation of the attended image-text interactive information is performed between tasks, leading to enhanced multimodal fusion. Comprehensive experiments validate the effectiveness of the proposed approach.
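To make the described architecture concrete, below is a minimal PyTorch sketch of the two ideas in the abstract: a separate attended image-text fusion per subtask, followed by bidirectional propagation of the fused representations across the task cascade. This is not the authors' implementation; the module names (CrossModalFusion, BiMCFSketch), the GRU-based bidirectional propagation, the per-task class counts, and the assumption of pre-extracted same-dimension text-token and image-region features are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Task-specific fusion: text tokens attend to image regions,
    so each subtask can extract the visual information it needs."""
    def __init__(self, dim):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, text_feats, image_feats):
        # text_feats: (B, T, D) token features; image_feats: (B, R, D) region features
        attended, _ = self.attn(text_feats, image_feats, image_feats)
        return self.proj(attended.mean(dim=1))  # pooled task representation (B, D)

class BiMCFSketch(nn.Module):
    """Hypothetical sketch: one fusion head per subtask, with bidirectional
    propagation of the attended image-text information between tasks."""
    def __init__(self, dim, num_classes=(2, 5, 3)):  # e.g., informativeness,
        super().__init__()                           # humanitarian, severity
        self.fusions = nn.ModuleList([CrossModalFusion(dim) for _ in num_classes])
        # a bidirectional GRU over the task axis stands in for the
        # forward/backward propagation between tasks in the cascade
        self.task_gru = nn.GRU(dim, dim, bidirectional=True, batch_first=True)
        self.heads = nn.ModuleList([nn.Linear(2 * dim, c) for c in num_classes])

    def forward(self, text_feats, image_feats):
        # per-task attended fusion (different tasks attend to different info)
        task_reps = torch.stack(
            [fuse(text_feats, image_feats) for fuse in self.fusions], dim=1
        )  # (B, num_tasks, D)
        # bidirectional pass enhances each task's multimodal representation
        propagated, _ = self.task_gru(task_reps)  # (B, num_tasks, 2D)
        return [head(propagated[:, i]) for i, head in enumerate(self.heads)]

# toy usage with random encoder outputs: batch=2, 16 tokens, 9 regions, dim=128
model = BiMCFSketch(dim=128)
logits = model(torch.randn(2, 16, 128), torch.randn(2, 9, 128))
print([l.shape for l in logits])  # [(2, 2), (2, 5), (2, 3)]
```

Joint training would then sum a per-task loss (e.g., cross-entropy) over the three heads, so gradients from each subtask flow through the shared propagation module.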
Submission Number: 1454