Abstract: Most modern deep learning-based multi-view 3D reconstruction techniques use RNNs or fusion modules to combine information from multiple images after independently encoding them. These two separate steps are only loosely connected and do not allow easy sharing of information among views. We propose LegoFormer, a transformer model for voxel-based 3D reconstruction that uses attention layers to share information among views during all computational stages. Moreover, instead of predicting each voxel independently, we propose to parametrize the output with a series of low-rank decomposition factors. This reformulation allows an object to be predicted as a set of independent regular structures that are then aggregated to obtain the final reconstruction. Experiments conducted on ShapeNet demonstrate that our model performs competitively with the state of the art while offering increased interpretability thanks to its self-attention layers. We also show promising generalization results on real data.
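As a rough illustration of the factorized output described above (a minimal sketch, not the paper's implementation): assuming each decomposition factor is a triplet of per-axis vectors whose three-way outer product forms a rank-1 voxel grid, the per-factor grids can be summed to produce the final reconstruction. The function name `aggregate_factors`, the grid size, and the sigmoid squashing below are illustrative assumptions.

```python
import numpy as np

def aggregate_factors(factors, grid_size=32):
    """Combine low-rank decomposition factors into a dense voxel grid.

    `factors` is a list of (z_x, z_y, z_z) triplets, each a vector of
    length `grid_size`. Each triplet defines a rank-1 voxel grid via a
    three-way outer product; the rank-1 grids are summed and squashed
    into [0, 1] occupancy probabilities.
    """
    occupancy = np.zeros((grid_size, grid_size, grid_size))
    for z_x, z_y, z_z in factors:
        # Three-way outer product -> one "regular structure" (rank-1 grid).
        occupancy += np.einsum('i,j,k->ijk', z_x, z_y, z_z)
    # Hypothetical squashing to occupancy probabilities; the actual
    # output head of the model may differ.
    return 1.0 / (1.0 + np.exp(-occupancy))

# Usage example: 12 factor triplets, e.g. predicted by a transformer decoder.
rng = np.random.default_rng(0)
factors = [tuple(rng.normal(size=32) for _ in range(3)) for _ in range(12)]
voxels = aggregate_factors(factors)   # (32, 32, 32) occupancy grid
reconstruction = voxels > 0.5         # binarized reconstruction
```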