Stereo RGB and Deeper LIDAR Based Network for 3D Object Detection in Autonomous Driving
Abstract: 3D object detection has emerged as an important task in autonomous driving scenarios. Most previous works process 3D point clouds using either projection-based or voxel-based models. However, both approaches have drawbacks: voxel-based methods lack semantic information, while projection-based methods suffer from substantial spatial information loss when points are projected to different views. In this paper, we propose the Stereo RGB and Deeper LIDAR (SRDL) framework, which exploits semantic and spatial information simultaneously so that the performance of the network for 3D object detection improves naturally. Specifically, the network generates candidate boxes from stereo pairs and combines different region-wise features using a deep fusion scheme. The stereo strategy offers more information for prediction than prior works. Then, several local and global feature extractors are stacked in the segmentation module to capture richer semantic and geometric features from point clouds. After aligning the interior points with the fused features, the proposed network refines the predictions more accurately and encodes the whole box with a novel compact method. Experimental results on the challenging KITTI detection benchmark demonstrate the effectiveness of utilizing both stereo images and point clouds for 3D object detection.
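The abstract does not specify the internals of the deep fusion scheme. The following is a minimal PyTorch sketch of one common variant of deep fusion (MV3D-style, element-wise merging interleaved with per-stream transforms) for intuition only; the module name `DeepFusion` and parameters `channels` and `num_layers` are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class DeepFusion(nn.Module):
    """Sketch of a deep fusion scheme: two region-wise feature streams
    are repeatedly transformed and merged element-wise, instead of
    being concatenated once as in early/late fusion."""

    def __init__(self, channels: int, num_layers: int = 3):
        super().__init__()
        self.img_layers = nn.ModuleList(
            [nn.Linear(channels, channels) for _ in range(num_layers)])
        self.pts_layers = nn.ModuleList(
            [nn.Linear(channels, channels) for _ in range(num_layers)])
        self.relu = nn.ReLU(inplace=True)

    def forward(self, img_feat: torch.Tensor, pts_feat: torch.Tensor):
        # img_feat / pts_feat: (N, C) region-wise features pooled from
        # the stereo-image branch and the point-cloud branch.
        fused = (img_feat + pts_feat) / 2  # initial element-wise merge
        for f_img, f_pts in zip(self.img_layers, self.pts_layers):
            # each stream re-processes the current fused feature ...
            img_feat = self.relu(f_img(fused))
            pts_feat = self.relu(f_pts(fused))
            # ... and the results are merged again (deep fusion)
            fused = (img_feat + pts_feat) / 2
        return fused


# usage: fuse 128-d features for a batch of 16 candidate boxes
fusion = DeepFusion(channels=128)
out = fusion(torch.randn(16, 128), torch.randn(16, 128))
print(out.shape)  # torch.Size([16, 128])
```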
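Similarly, the stacked local and global feature extractors in the segmentation module are not detailed here. The sketch below shows one standard (PointNet-style) realization under that assumption: a shared per-point MLP provides local features, a max-pooled global vector provides context, and the two are concatenated per point. All names are hypothetical.

```python
import torch
import torch.nn as nn


class LocalGlobalExtractor(nn.Module):
    """One local/global block: per-point MLP for local geometry,
    max-pooled global descriptor for context, concatenated per point."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.local_mlp = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, 1), nn.BatchNorm1d(out_ch), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, N) point features
        local = self.local_mlp(x)                     # (B, out_ch, N)
        glob = local.max(dim=2, keepdim=True).values  # (B, out_ch, 1)
        glob = glob.expand(-1, -1, local.size(2))     # broadcast to all points
        return torch.cat([local, glob], dim=1)        # (B, 2*out_ch, N)


# stacking several such blocks deepens the semantic/geometric features
blocks = nn.Sequential(
    LocalGlobalExtractor(3, 64),     # raw xyz in, 128-d out
    LocalGlobalExtractor(128, 128))  # 256-d out
pts = torch.randn(4, 3, 1024)        # batch of 4 clouds, 1024 points each
print(blocks(pts).shape)             # torch.Size([4, 256, 1024])
```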