Abstract: Depth perception is pivotal in many fields, such as robotics and autonomous driving. Consequently, depth sensors such as LiDARs have rapidly spread to many applications. The 3D point clouds generated by these sensors must often be coupled with an RGB camera to understand the framed scene semantically. Usually, the point cloud is projected onto the camera image plane, yielding a sparse depth map. Unfortunately, this process, combined with the intrinsic issues affecting all depth sensors, produces noise and gross outliers in the final output. To address this issue explicitly, we propose an effective unsupervised framework that learns to estimate the confidence of the LiDAR sparse depth map, thus allowing the outliers to be filtered out. Experimental results on the KITTI dataset highlight that our framework excels at this task. Moreover, we demonstrate how this achievement can improve a wide range of other tasks.
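The projection step mentioned above, mapping LiDAR points onto the camera image plane to obtain a sparse depth map, can be sketched as follows. This is a minimal illustration assuming a pinhole camera model with intrinsics `K` and points already expressed in camera coordinates; the function name and signature are hypothetical, not from the paper.

```python
import numpy as np

def project_to_depth_map(points, K, h, w):
    """Project 3D points (N, 3), given in camera coordinates, onto an
    h x w image plane using pinhole intrinsics K (3x3), producing a
    sparse depth map. Pixels receiving no point remain 0 (invalid)."""
    depth = np.zeros((h, w), dtype=np.float32)
    # Keep only points in front of the camera.
    pts = points[points[:, 2] > 0]
    # Pinhole projection: u = fx*X/Z + cx, v = fy*Y/Z + cy.
    uvw = (K @ pts.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    z = pts[:, 2]
    # Discard projections that fall outside the image bounds.
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z = u[ok], v[ok], z[ok]
    # When several points land on the same pixel, keep the nearest one:
    # sort by decreasing depth so closer points overwrite farther ones.
    order = np.argsort(-z)
    depth[v[order], u[order]] = z[order]
    return depth
```

Because the LiDAR returns far fewer points than there are pixels, most entries of the resulting map stay empty, and occlusions or calibration errors in this very step are among the sources of the outliers the framework aims to detect.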