Radar-Camera Pixel Depth Association for Depth Completion

29 Sept 2021 · OpenReview Archive Direct Upload · Readers: Everyone
Abstract: While radar and video data can be readily fused at the detection level, fusing them at the pixel level is potentially more beneficial. It is also more challenging, in part due to the sparsity of radar, but also because automotive radar beams are much wider than a typical pixel and there is a large baseline between camera and radar, which results in poor association between radar pixels and color pixels. A consequence is that depth completion methods designed for LiDAR and video fare poorly for radar and video. Here we propose a radar-to-pixel association stage which learns a mapping from radar returns to pixels. This mapping also serves to densify radar returns. Using this as a first stage, followed by a more traditional depth completion method, we are able to achieve image-guided depth completion with radar and video. We demonstrate performance superior to camera and radar alone on the nuScenes dataset. Our source code is available at https://github.com/longyunf/rc-pda.
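The abstract describes a two-stage pipeline: a learned radar-to-pixel association that densifies the sparse radar depth, followed by a conventional image-guided depth completion network. Below is a minimal PyTorch sketch of that pipeline shape only; every module name, channel size, and tensor layout is an illustrative assumption, not the authors' implementation (see the linked repository for that).

```python
# Illustrative two-stage sketch: (1) associate/densify radar depth, (2) complete depth.
# All names and hyperparameters here are assumptions, not the rc-pda code.
import torch
import torch.nn as nn


def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class AssociationStage(nn.Module):
    """Stage 1: map sparse projected radar returns to the image pixels they
    plausibly explain, producing a semi-dense radar depth map plus a confidence."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(4, 32),                 # 3 RGB channels + 1 sparse radar depth channel
            conv_block(32, 32),
            nn.Conv2d(32, 2, kernel_size=1),   # -> [densified depth, confidence logit]
        )

    def forward(self, rgb, sparse_radar_depth):
        x = torch.cat([rgb, sparse_radar_depth], dim=1)
        out = self.net(x)
        densified_depth = torch.relu(out[:, :1])   # non-negative depth
        confidence = torch.sigmoid(out[:, 1:])     # per-pixel association confidence
        return densified_depth, confidence


class CompletionStage(nn.Module):
    """Stage 2: conventional image-guided depth completion on the densified input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(5, 32),                 # RGB + densified depth + confidence
            conv_block(32, 32),
            nn.Conv2d(32, 1, kernel_size=1),
        )

    def forward(self, rgb, densified_depth, confidence):
        x = torch.cat([rgb, densified_depth, confidence], dim=1)
        return torch.relu(self.net(x))         # dense depth prediction


if __name__ == "__main__":
    rgb = torch.rand(1, 3, 96, 160)            # camera image
    radar = torch.zeros(1, 1, 96, 160)         # projected radar returns (sparse)
    radar[0, 0, 60, ::20] = 15.0               # a handful of returns at ~15 m
    stage1, stage2 = AssociationStage(), CompletionStage()
    dense_radar, conf = stage1(rgb, radar)
    depth = stage2(rgb, dense_radar, conf)
    print(depth.shape)                         # torch.Size([1, 1, 96, 160])
```

The point of the sketch is the data flow: the first stage turns sparse, poorly aligned radar returns into a denser depth-plus-confidence input, which lets a standard LiDAR-style completion network operate on radar.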