Zero-Shot Character Identification and Speaker Prediction in Comics via Iterative Multimodal Fusion

Published: 20 Jul 2024, Last Modified: 21 Jul 2024. MM 2024 Oral. License: CC BY 4.0
Abstract: Recognizing characters and predicting the speakers of dialogue are critical for comic processing tasks such as voice generation or translation. However, because characters vary across comic titles, supervised learning approaches, such as training character classifiers, are infeasible: they require annotations specific to each title. This motivates us to propose a novel zero-shot approach that allows machines to identify characters and predict speaker names based solely on unannotated comic images. Despite their importance in real-world applications, these tasks have largely remained unexplored due to challenges in story comprehension and multimodal integration. Recent large language models (LLMs) have shown strong capability for text understanding and reasoning, yet their application to multimodal content analysis remains an open problem. To address this problem, we propose an iterative multimodal framework, the first to employ multimodal information for both character identification and speaker prediction. Our experiments demonstrate the effectiveness of the proposed framework, establishing a robust baseline for these tasks. Furthermore, since our method requires no training data or annotations, it can be used as-is on any comic series.
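As a rough illustration of the iterative multimodal loop the abstract describes, the sketch below alternates between visual character identification and LLM-based speaker prediction. It is a hypothetical sketch, not the authors' implementation: the `Panel` structure, the `iterate_fusion` function, the assumption that character embeddings and the number of characters are given, and the crude label-refinement step are all illustrative placeholders, and `llm` stands for any callable that returns one speaker name per dialogue line.

```python
# Minimal, hypothetical sketch of an iterative multimodal fusion loop for
# zero-shot speaker prediction; NOT the authors' implementation. Assumptions:
# each panel carries one dialogue line and a visual embedding of its nearest
# character region, the number of distinct characters is known, and `llm` is
# any callable returning one speaker name per line given tentative labels.
from dataclasses import dataclass
from typing import Callable, List

import numpy as np
from sklearn.cluster import KMeans


@dataclass
class Panel:
    dialogue: str                # text extracted from a speech bubble
    face_embedding: np.ndarray   # embedding of the associated character region


def iterate_fusion(panels: List[Panel],
                   llm: Callable[[List[str], List[int]], List[str]],
                   n_characters: int,
                   n_iters: int = 3):
    """Alternate visual character identification and LLM speaker prediction."""
    X = np.stack([p.face_embedding for p in panels])
    # Vision only: initial character identities from clustering the embeddings.
    labels = KMeans(n_clusters=n_characters, n_init=10).fit_predict(X)

    speakers: List[str] = []
    for _ in range(n_iters):
        # Text side: the LLM names the speaker of each dialogue line,
        # conditioned on the current (possibly noisy) character labels.
        speakers = llm([p.dialogue for p in panels], labels.tolist())

        # Fusion side: panels that received the same speaker name are pulled
        # into the same character cluster (a crude refinement, for
        # illustration only), and the loop repeats with the updated labels.
        name_to_id = {name: i for i, name in enumerate(dict.fromkeys(speakers))}
        labels = np.array([name_to_id[s] for s in speakers])

    return labels, speakers
```

The intent of the sketch is only to show the fusion pattern: each modality's current guess conditions the other modality on the next pass, without any title-specific training.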
Primary Subject Area: [Content] Multimodal Fusion
Secondary Subject Area: [Generation] Multimedia Foundation Models, [Content] Media Interpretation, [Content] Vision and Language
Relevance To Conference: Our contribution to the ACM Multimedia conference lies in introducing and tackling a novel task: zero-shot character identification and speaker prediction in comics, a first in multimodal processing that leverages comics' textual, visual, and sequential storytelling elements without relying on annotated data specific to each comic series. To our knowledge, we are the first to combine foundation models (LLMs) with multimodal data to solve this new challenge. This approach not only sets a new benchmark for zero-shot learning in multimodal processing but also exemplifies the innovative application of foundation models beyond their conventional text-based domains. Our work thus offers significant insights and a robust methodology for the broader multimedia research community, highlighting the expansive utility of LLMs when adeptly integrated with other modalities for complex understanding and prediction tasks.
Supplementary Material: zip
Submission Number: 5377