Abstract: Neural Radiance Fields (NeRF) have recently transformed novel view synthesis, but they require extensive computation for training and struggle to capture fine variations in detail. In this paper, we propose a novel framework, termed CD-TDRF, to address these issues. CD-TDRF factorizes a density voxel grid into a core tensor and three factor matrices via Tucker decomposition, reducing memory usage and accelerating training. To better capture variations in complex scenes, CD-TDRF employs a fully convolutional network to extract prior information from the training images. Moreover, three learnable appearance planes are constructed to preserve information about scene details, which significantly enhances rendering quality. Experimental results demonstrate that CD-TDRF achieves competitive rendering quality on three popular datasets and trains faster than traditional NeRF models.
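To illustrate the core idea of the factorization, the following is a minimal sketch of a Tucker-decomposed density voxel grid, assuming a PyTorch implementation; the grid resolution, ranks, and parameter names (core, U_x, U_y, U_z) are illustrative assumptions and are not taken from the paper.

```python
# Sketch only: a dense X*Y*Z density volume represented as a small core
# tensor plus three per-axis factor matrices (Tucker decomposition).
import torch
import torch.nn as nn


class TuckerDensityGrid(nn.Module):
    def __init__(self, grid_size=(128, 128, 128), ranks=(16, 16, 16)):
        super().__init__()
        X, Y, Z = grid_size
        R1, R2, R3 = ranks
        # Learnable core tensor and mode factor matrices.
        self.core = nn.Parameter(0.1 * torch.randn(R1, R2, R3))
        self.U_x = nn.Parameter(0.1 * torch.randn(X, R1))
        self.U_y = nn.Parameter(0.1 * torch.randn(Y, R2))
        self.U_z = nn.Parameter(0.1 * torch.randn(Z, R3))

    def full_grid(self):
        # sigma[x, y, z] = sum_{a,b,c} core[a,b,c] * U_x[x,a] * U_y[y,b] * U_z[z,c]
        return torch.einsum('abc,xa,yb,zc->xyz',
                            self.core, self.U_x, self.U_y, self.U_z)


grid = TuckerDensityGrid()
print(grid.full_grid().shape)                       # torch.Size([128, 128, 128])
print(sum(p.numel() for p in grid.parameters()))    # far fewer parameters than 128**3 dense entries
```

The memory saving comes from storing only the small core and the three factor matrices instead of the full voxel grid, which is reconstructed (or sampled) on the fly during rendering.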