Resource-Conscious High-Performance Models for 2D-to-3D Single-View Reconstruction

Published: 01 Jan 2021, Last Modified: 03 Mar 2024, TENCON 2021
Abstract: We propose two transfer learning-based deep neural network architectures for 2D-to-3D single-view image reconstruction, with an emphasis on low computational resources for training and high reconstruction performance. The proposed models, namely AE-Dense and 3D-SkipNet, use DenseNet and ResNet architectures in the encoder, with additional skip connections. Through an extensive experimental study on the 3D ShapeNets database, we show that the proposed models outperform state-of-the-art models, namely Pix2Vox and 3D-R2N2, in terms of the intersection over union (IoU) metric. In particular, AE-Dense offers the highest IoU, while 3D-SkipNet yields a significant reduction in memory and training time compared to Pix2Vox and 3D-R2N2.
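The abstract evaluates reconstructions with the intersection over union (IoU) metric. A minimal sketch of how voxel-grid IoU is typically computed for ShapeNet-style benchmarks follows; the function name, threshold, and grid resolution are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch (not from the paper): IoU between a predicted occupancy
# grid and a binary ground-truth grid, as commonly used to score
# 2D-to-3D voxel reconstruction.
import numpy as np

def voxel_iou(pred, target, threshold=0.5):
    """IoU of two voxel grids.

    pred: float occupancy probabilities, binarized at `threshold`.
    target: ground-truth grid of the same shape (treated as boolean).
    """
    p = pred >= threshold
    t = target.astype(bool)
    intersection = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    # Convention: two empty grids are a perfect match.
    return intersection / union if union > 0 else 1.0

# Toy example at 32^3, a resolution often used with 3D-R2N2/Pix2Vox.
rng = np.random.default_rng(0)
gt = rng.random((32, 32, 32)) > 0.7
print(voxel_iou(gt.astype(float), gt))  # identical grids give IoU 1.0
```

A higher IoU means the predicted occupied voxels overlap the ground truth more closely, which is the sense in which the abstract's comparison against Pix2Vox and 3D-R2N2 should be read.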