Talk to Parallel LiDARs: A Human-LiDAR Interaction Method Based on 3D Visual Grounding

Published: 11 Aug 2024 · Last Modified: 21 Sept 2024
ECCV 2024 W-CODA Workshop Full Paper Track
License: CC BY 4.0
Keywords: Autonomous Driving, Parallel LiDARs, 3D Scene Understanding, 3D Visual Grounding
TL;DR: This paper explores the 3D visual grounding task in autonomous driving and introduces a novel human-LiDAR interaction paradigm for 3D scene understanding.
Subject: 3D object detection and scene understanding
Confirmation: I have read and agree with the submission policies of ECCV 2024 and the W-CODA Workshop on behalf of myself and my co-authors.
Abstract: LiDAR sensors play a crucial role in various applications, especially in autonomous driving. Current research primarily focuses on optimizing perceptual models with point cloud data as input, while the exploration of deeper cognitive intelligence remains relatively limited. To address this challenge, parallel LiDARs have emerged as a novel theoretical framework for next-generation intelligent LiDAR systems, tightly integrating physical, digital, and social systems. To endow LiDAR systems with cognitive capabilities, we introduce the 3D visual grounding task into parallel LiDARs and present a novel human-LiDAR interaction paradigm for 3D scene understanding. We propose Talk2LiDAR, a large-scale benchmark dataset for 3D visual grounding in autonomous driving. Additionally, we present a two-stage baseline approach and an efficient one-stage method named BEVGrounding, which significantly improves grounding accuracy by fusing coarse-grained sentence embeddings and fine-grained word embeddings with visual features. Our experiments on the Talk2Car-3D and Talk2LiDAR datasets demonstrate the superior performance of BEVGrounding, laying a foundation for further research in this domain. We will release all datasets, code, and checkpoints at https://github.com/liuyuhang2021/Talk2LiDAR.
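To make the fusion idea in the abstract concrete, below is a minimal illustrative sketch of one common way to combine sentence-level (coarse-grained) and word-level (fine-grained) text embeddings with BEV visual features via cross-attention. All module and tensor names (`TextGuidedBEVFusion`, `bev_feats`, `word_embeds`, `sent_embed`) are hypothetical assumptions for illustration and are not taken from the BEVGrounding implementation.

```python
# Illustrative sketch only: a generic cross-attention fusion of sentence-level
# and word-level text embeddings with BEV visual features. Names and design
# choices are assumptions, not the actual BEVGrounding architecture.
import torch
import torch.nn as nn


class TextGuidedBEVFusion(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 8):
        super().__init__()
        # Cross-attention: each BEV cell attends to word tokens (fine-grained cue).
        self.word_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Sentence embedding gates the BEV features globally (coarse-grained cue).
        self.sent_gate = nn.Sequential(nn.Linear(d_model, d_model), nn.Sigmoid())
        self.norm = nn.LayerNorm(d_model)

    def forward(self, bev_feats, word_embeds, sent_embed):
        # bev_feats:   (B, H*W, C)  flattened BEV visual features
        # word_embeds: (B, T, C)    per-word text embeddings
        # sent_embed:  (B, C)       pooled sentence embedding
        attn_out, _ = self.word_attn(query=bev_feats, key=word_embeds, value=word_embeds)
        fused = self.norm(bev_feats + attn_out)          # word-level fusion
        gate = self.sent_gate(sent_embed).unsqueeze(1)   # (B, 1, C)
        return fused * gate                              # sentence-level modulation


# Toy usage with random tensors
if __name__ == "__main__":
    B, H, W, C, T = 2, 32, 32, 256, 12
    fusion = TextGuidedBEVFusion(d_model=C)
    out = fusion(torch.randn(B, H * W, C), torch.randn(B, T, C), torch.randn(B, C))
    print(out.shape)  # torch.Size([2, 1024, 256])
```

The grounding head that predicts the referred 3D box from the fused features is omitted here; refer to the released code for the authors' actual design.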
Submission Number: 11