Mono3D-VLDL: Perception-Aware Vision-Language Dictionary Learning for Multimodal Fusion in Monocular 3D Grounding

Published: 09 Jun 2025 | Last Modified: 09 Jun 2025 | Robo-3Dvlm Poster | CC BY 4.0
Keywords: Monocular 3D visual grounding, Human-computer interaction, Vision-language model
Abstract: We propose Mono3D-VLDL, a novel single-stage framework for vision-language fusion in robotic vision that addresses the limitations of traditional two-stage methods, which perform image registration and feature fusion separately. Such methods are computationally intensive, demand substantial hardware, and struggle to bridge the modality gap between language and visual data, particularly in dynamic environments. Mono3D-VLDL unifies registration and fusion in a single stage, eliminating the need for explicit image registration. The framework employs a Cross-Modality Dictionary to compensate for missing textual information while preserving modality-specific features. Additionally, it uses parallel cross-attention mechanisms to integrate depth, text, and visual information for robust 3D object attribute prediction. Experiments on the Mono3DRefer dataset demonstrate that our method achieves superior efficiency and accuracy compared to existing two-stage approaches, making it well suited for real-time robotic applications in resource-constrained settings.
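To make the two components named in the abstract concrete, the sketch below illustrates one plausible reading of a learned Cross-Modality Dictionary (visual tokens soft-assigned to learned atoms to synthesize pseudo-text features) and of parallel cross-attention fusion over depth, text, and visual features. This is a minimal illustration, not the authors' implementation: all module names, dimensions, the soft-assignment formulation, and the residual-sum fusion are assumptions.

```python
# Hedged sketch of the two ideas in the abstract; not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalityDictionary(nn.Module):
    """Learned dictionary queried by visual tokens to compensate for missing
    textual information (hypothetical formulation)."""

    def __init__(self, num_atoms: int = 256, dim: int = 256):
        super().__init__()
        self.atoms = nn.Parameter(torch.randn(num_atoms, dim) * 0.02)

    def forward(self, visual_feats: torch.Tensor) -> torch.Tensor:
        # visual_feats: (B, N, D). Soft-assign each visual token to the
        # dictionary atoms and return reconstructed "pseudo-text" features.
        logits = visual_feats @ self.atoms.t()   # (B, N, K)
        weights = F.softmax(logits, dim=-1)      # soft assignment per token
        return weights @ self.atoms              # (B, N, D)


class ParallelCrossAttentionFusion(nn.Module):
    """Two cross-attention branches run in parallel: visual tokens attend to
    text features and to depth features; outputs are fused by a residual sum."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.text_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.depth_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual, text, depth):
        t_out, _ = self.text_attn(visual, text, text)     # visual queries text
        d_out, _ = self.depth_attn(visual, depth, depth)  # visual queries depth
        return self.norm(visual + t_out + d_out)          # residual fusion


if __name__ == "__main__":
    B, Nv, Nt, Nd, D = 2, 196, 20, 196, 256
    visual = torch.randn(B, Nv, D)
    text = torch.randn(B, Nt, D)
    depth = torch.randn(B, Nd, D)

    # Augment the (possibly incomplete) text tokens with dictionary output.
    pseudo_text = CrossModalityDictionary(dim=D)(visual)
    text_aug = torch.cat([text, pseudo_text], dim=1)

    fused = ParallelCrossAttentionFusion(dim=D)(visual, text_aug, depth)
    print(fused.shape)  # torch.Size([2, 196, 256])
```

The fused tokens would then feed a 3D attribute head (location, dimensions, orientation); that head is omitted here since the abstract does not specify it.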
Submission Number: 7