Keywords: collaborative perception, multimodal, modal failure, modality competition
TL;DR: This paper presents a multimodal collaborative perception method that remains effective even when modal failure occurs or only a single modality is available during collaboration.
Abstract: Collaborative perception integrates multi-agent perspectives to extend the sensing range and overcome occlusion. While existing multimodal approaches leverage complementary sensors to improve performance, they are highly prone to failure, especially when a key sensor such as LiDAR becomes unavailable. The root cause is that feature fusion creates semantic mismatches between single-modality features and the downstream modules. This paper addresses this challenge for the first time in the field of collaborative perception, introducing **Si**ngle-**M**odality-**O**perable Multimodal Collaborative Perception (**SiMO**). Through the proposed **L**ength-**A**daptive **M**ulti-**M**od**a**l Fusion (**LAMMA**), SiMO adaptively handles the remaining modal features during modal failures while maintaining the consistency of the semantic space. Additionally, by leveraging the innovative "Pretrain-Align-Fuse-RD" training strategy, SiMO addresses the issue of modality competition, which is generally overlooked by existing methods, and ensures the independence of each individual modality branch. Experiments demonstrate that SiMO effectively aligns multimodal features while preserving modality-specific features, enabling it to maintain optimal performance across all individual modalities.
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 17106