Abstract: We present a deep learning model, dubbed Glissando-Net, that simultaneously estimates the pose and reconstructs the 3D shape of objects at the category level from a single RGB image. Prior work has predominantly focused on either estimating poses (often at the instance level) or reconstructing shapes, but not both. Glissando-Net is composed of two jointly trained auto-encoders, one for RGB images and the other for point clouds. We make two key design choices to achieve more accurate prediction of the 3D shape and pose of an object given a single RGB image as input. First, we augment the feature maps of the point cloud encoder and decoder with transformed feature maps from the image decoder, enabling effective 2D-3D interaction in both training and prediction. Second, we predict both the 3D shape and the pose of the object in the decoder stage. This way, we better exploit the information in the 3D point clouds, which are present only during training, to supervise the network for more accurate prediction. The two encoder-decoders for RGB and point cloud data are trained jointly so that latent features can be passed to the point cloud decoder at inference time; the point cloud encoder is discarded during testing. The design of Glissando-Net is inspired by CodeSLAM. Unlike CodeSLAM, which targets 3D reconstruction of scenes, we focus on pose estimation and shape reconstruction of objects, and directly predict the object pose and a pose-invariant 3D reconstruction without the need for a code optimization step. Extensive experiments, including both ablation studies and comparisons with competing methods, demonstrate the efficacy of the proposed approach, which compares favorably with the state of the art.
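To make the two-branch design concrete, below is a minimal PyTorch-style sketch of the idea described in the abstract: an RGB auto-encoder whose decoded features are transformed and injected into a point cloud decoder that predicts both shape and pose, with the point cloud encoder used only at training time. All module names, layer sizes, tensor shapes, and the pose parameterization here are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch (assumed names/shapes, not the authors' code) of a two-branch
# model: an RGB auto-encoder plus a point cloud auto-encoder whose decoder
# consumes transformed image-decoder features and predicts shape and pose.
import torch
import torch.nn as nn


class GlissandoSketch(nn.Module):
    def __init__(self, latent_dim=256, num_points=1024):
        super().__init__()
        self.num_points = num_points
        # RGB branch: encoder to a latent code, decoder producing features.
        self.img_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        self.img_decoder = nn.Linear(latent_dim, 128)  # stand-in "feature map"
        # Transform image-decoder features before injecting them into 3D branch.
        self.fuse = nn.Linear(128, 128)
        # Point cloud branch (the encoder is used only during training).
        self.pc_encoder = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, latent_dim)
        )
        # Decoder-stage heads: per-point coordinates and a pose vector
        # (e.g., a 6D rotation representation plus a 3D translation).
        self.pc_decoder = nn.Sequential(
            nn.Linear(latent_dim + 128, 512), nn.ReLU(),
            nn.Linear(512, num_points * 3),
        )
        self.pose_head = nn.Linear(latent_dim + 128, 9)

    def forward(self, rgb, points=None):
        z_img = self.img_encoder(rgb)                  # (B, latent_dim)
        img_feat = self.fuse(self.img_decoder(z_img))  # transformed 2D features
        if self.training and points is not None:
            # Training: the point cloud encoder supplies the 3D latent code
            # (global max-pool over per-point features).
            z_3d = self.pc_encoder(points).max(dim=1).values
        else:
            # Testing: the point cloud encoder is discarded; the image branch
            # alone provides the latent features passed to the 3D decoder.
            z_3d = z_img
        fused = torch.cat([z_3d, img_feat], dim=-1)
        shape = self.pc_decoder(fused).view(-1, self.num_points, 3)
        pose = self.pose_head(fused)  # pose predicted in the decoder stage
        return shape, pose


# Usage: shape, pose = GlissandoSketch()(torch.randn(2, 3, 64, 64))
```

At test time the `else` branch is the only 3D path exercised, which mirrors the abstract's point that joint training teaches the image branch to supply latent features the point cloud decoder can consume once its own encoder is dropped.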