Mutual Use of Semantics and Geometry for CNN-Based Object Localization in ToF Images

Published: 01 Jan 2020 · Last Modified: 14 Nov 2024 · ICPR Workshops (2) 2020 · CC BY-SA 4.0
Abstract: We propose a novel approach to localize a 3D object from the intensity and depth images provided by a Time-of-Flight (ToF) sensor. Our method builds on two convolutional neural networks (CNNs). The first takes raw depth and intensity images as input and segments the floor pixels, from which the extrinsic parameters of the camera are estimated. The second CNN segments the object of interest so as to align its point cloud with a reference model. As its main innovation, the object segmentation exploits the calibration estimated from the first CNN's prediction to represent the geometric depth information in a coordinate system attached to the ground, and thus independent of the camera elevation. In practice, both the height of pixels with respect to the ground and the orientation of normals to the point cloud are provided as input to the second CNN.
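The abstract's geometric step — estimating the ground plane from segmented floor pixels and expressing point heights relative to it — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the plane-fitting via SVD, the function names, and the assumption that "up" is the +z axis are all choices made here for the example.

```python
import numpy as np

def ground_plane_from_floor(points):
    """Least-squares plane fit to floor points (N x 3 array).

    Returns (n, d) with unit normal n such that n . p + d = 0 for
    points p on the plane. Assumes +z roughly points away from the
    ground (an assumption of this sketch, not of the paper).
    """
    centroid = points.mean(axis=0)
    # Normal = right singular vector of the smallest singular value
    # of the centered point matrix.
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    if n[2] < 0:  # orient the normal "upward"
        n = -n
    d = -n @ centroid
    return n, d

def height_above_ground(points, n, d):
    """Signed distance of each 3D point to the fitted ground plane.

    This is the camera-elevation-independent height feature the
    second CNN would consume.
    """
    return points @ n + d
```

A point cloud's per-pixel height map would then be obtained by back-projecting each depth pixel to 3D and calling `height_above_ground`; the surface-normal feature mentioned in the abstract would be computed analogously from local neighborhoods.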