Abstract: Current LiDAR point cloud-based 3D single object tracking (SOT) methods typically rely on point-based representation networks. Despite demonstrated success, such networks suffer from fundamental problems: 1) They contain pooling operations to cope with inherently unordered point clouds, hindering the capture of the 3D spatial information that is useful for tracking, a regression task. 2) The adopted set abstraction operation can hardly handle density-inconsistent point clouds, also preventing 3D spatial information from being modeled. To solve these problems, we introduce a novel tracking framework, termed VoxelTrack. By voxelizing inherently unordered point clouds into 3D voxels and extracting their features via sparse convolution blocks, VoxelTrack effectively models precise and robust 3D spatial information, thereby guiding accurate position prediction for tracked objects. Moreover, VoxelTrack incorporates a dual-stream encoder with a cross-iterative feature fusion module to further exploit fine-grained 3D spatial information for tracking. Benefiting from the accurately modeled 3D spatial information, our VoxelTrack simplifies the tracking pipeline to a single regression loss. Extensive experiments are conducted on three widely adopted datasets: KITTI, NuScenes, and Waymo Open Dataset. The experimental results confirm that VoxelTrack achieves state-of-the-art performance (88.3%, 71.4%, and 63.6% mean precision on the three datasets, respectively) and outperforms existing trackers at a real-time speed of 36 FPS on a single TITAN RTX GPU.
The source code and model will be released.
Primary Subject Area: [Experience] Multimedia Applications
Secondary Subject Area: [Content] Vision and Language
Relevance To Conference: This work introduces a novel voxel representation-based 3D single object tracking framework, termed VoxelTrack. The framework leverages voxel representation to explore 3D spatial information and guide direct box regression for tracking. Moreover, it incorporates a dual-stream encoder with a cross-iterative feature fusion module to further model fine-grained 3D spatial information. Through extensive experiments and analyses, we show that our proposed VoxelTrack effectively handles unordered and density-inconsistent point clouds, thereby exhibiting state-of-the-art performance and showing potential for multimedia applications such as intelligent tracking, navigation, and augmented reality.
Supplementary Material: zip
Submission Number: 930