Abstract: We present the first approach for 3D point-cloud-to-image translation based on conditional Generative Adversarial Networks (cGANs). The model handles multi-modal information sources from different domains, i.e. raw point sets and images. The generator processes three conditions: the point cloud, encoded as a raw point set, a camera projection, and an image background patch that serves as a constraint to bias the environmental texturing. A global approximation function within the generator is applied directly to the point cloud (PointNet). Hence, the representation learning model incorporates global 3D characteristics directly in the latent feature space. The conditions are used to bias the background and the viewpoint of the generated image. This opens up new ways of augmenting or texturing 3D data, aiming at the generation of fully individual images. We successfully evaluated our method on the KITTI and SunRGBD datasets, achieving an outstanding object-detection-based Inception score.
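As a rough illustration of the described architecture, the sketch below (not the authors' code) shows a generator that fuses a PointNet-style global feature from the raw point set with a camera-projection vector and an encoded background patch before decoding an image. All layer sizes, the flattened 3x4 camera-projection vector, and the 64x64 patch resolution are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PointNetEncoder(nn.Module):
    """Shared per-point MLP followed by a symmetric max-pool (PointNet-style global feature)."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, latent_dim, 1),
        )

    def forward(self, points):          # points: (B, 3, N)
        feats = self.mlp(points)        # per-point features: (B, latent_dim, N)
        return feats.max(dim=2).values  # order-invariant global feature: (B, latent_dim)

class ConditionalGenerator(nn.Module):
    """Fuses three conditions (point set, camera projection, background patch) into one latent."""
    def __init__(self, latent_dim=256, cam_dim=12, img_size=64):
        super().__init__()
        self.point_enc = PointNetEncoder(latent_dim)
        self.patch_enc = nn.Sequential(  # encode the background-patch condition
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * (img_size // 4) ** 2, latent_dim),
        )
        fused = latent_dim + cam_dim + latent_dim
        self.decode = nn.Sequential(     # upsample the fused conditions to an image
            nn.Linear(fused, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, points, cam, patch):
        # Concatenate all three conditions at the latent feature space, then decode.
        z = torch.cat([self.point_enc(points), cam, self.patch_enc(patch)], dim=1)
        return self.decode(z)            # generated image: (B, 3, 64, 64)

# Usage: points (B, 3, N), flattened camera projection (B, 12), patch (B, 3, 64, 64)
G = ConditionalGenerator()
img = G(torch.randn(2, 3, 1024), torch.randn(2, 12), torch.randn(2, 3, 64, 64))
```

The max-pool in the encoder is what makes the global feature invariant to the ordering of the input points, so global 3D shape characteristics enter the latent space directly, while the camera and patch conditions steer viewpoint and background as described in the abstract.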