Keywords: Multimodal Social Interaction, Multimodal Language Models, Video Question Answering
Abstract: Understanding social interaction in video requires reasoning over a dynamic interplay of verbal and non-verbal cues: who is speaking, to whom, and with what gaze or gestures.
While Multimodal Large Language Models (MLLMs) are natural candidates, simply adding visual inputs yields surprisingly inconsistent gains on social tasks.
Our quantitative analysis of cross-modal attention inside state-of-the-art MLLMs reveals a core failure mode: in multi-speaker scenes, visual and textual tokens lack speaker-consistent alignment, exhibiting substantially weaker cross-modal attention than in object-centric images.
To address this, we propose a multimodal multi-speaker attention alignment method that can be integrated into existing MLLMs. First, we introduce dynamic cross-modal head selection to identify attention heads most responsible for grounding.
Then, an adaptive social-aware attention bias, computed from existing attention patterns and speaker locations, is injected into the attention mechanism.
This bias reinforces alignment between a speaker’s visual representation and their utterances without introducing trainable parameters or architectural changes.
Experiments on three datasets (TVQA+, MMSI, and OnlineMMSI) across four social tasks demonstrate that our approach improves the social reasoning ability of MLLMs and achieves state-of-the-art results on multiple tasks.
Attention visualizations confirm that our method focuses the model on speaker-relevant regions, enabling more robust multi-party social reasoning.
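To make the two-step mechanism described in the abstract concrete, the following is a minimal PyTorch sketch of cross-modal head selection plus an additive speaker-aware attention bias. It assumes a single decoder layer's attention scores and known token-index sets for each speaker's visual region and utterances; all function names, and the fixed bias scale standing in for the paper's adaptive value, are illustrative, not the authors' released implementation.

```python
import torch

def select_grounding_heads(attn, text_idx, vis_idx, k=4):
    """Pick the k heads with the largest text->visual attention mass.

    attn: (num_heads, seq_len, seq_len) post-softmax attention of one layer.
    """
    cross = attn[:, text_idx][:, :, vis_idx]      # (H, |text|, |vis|) cross-modal block
    mass = cross.sum(dim=(1, 2))                  # total cross-modal mass per head
    return torch.topk(mass, k).indices            # head ids to modify

def social_attention_bias(seq_len, speakers, scale=1.0):
    """Additive bias linking each speaker's utterance tokens to their visual tokens.

    speakers: list of (utterance_token_ids, visual_token_ids) pairs, one per speaker.
    scale: fixed here; the paper computes it adaptively from attention and speaker locations.
    """
    bias = torch.zeros(seq_len, seq_len)
    for utt_idx, vis_idx in speakers:
        u = torch.tensor(utt_idx).unsqueeze(1)
        v = torch.tensor(vis_idx)
        bias[u, v] = scale                        # utterance tokens attend to own speaker region
        bias[v.unsqueeze(1), torch.tensor(utt_idx)] = scale  # and vice versa
    return bias

def biased_attention(scores, heads, bias):
    """Inject the bias into the pre-softmax scores of the selected heads only."""
    scores = scores.clone()
    scores[heads] = scores[heads] + bias          # no trainable parameters, no new modules
    return torch.softmax(scores, dim=-1)

# Toy usage: 2 heads, 8 tokens; tokens 0-3 visual, 4-7 text;
# speaker A pairs utterances (4,5) with region (0,1), speaker B pairs (6,7) with (2,3).
scores = torch.randn(2, 8, 8)
attn = torch.softmax(scores, dim=-1)
heads = select_grounding_heads(attn, text_idx=[4, 5, 6, 7], vis_idx=[0, 1, 2, 3], k=1)
bias = social_attention_bias(8, speakers=[([4, 5], [0, 1]), ([6, 7], [2, 3])])
new_attn = biased_attention(scores, heads, bias)
```

Because the bias is added only to the pre-softmax scores of the selected heads, the remaining heads and all model weights are untouched, which is what allows the method to plug into existing MLLMs without retraining.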
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 2733