Although the neural radiance field (NeRF) achieves high-fidelity rendering, it still suffers from rendering defects in complex scenes. One primary reason is limited model capacity, yet directly increasing the network's width and depth does not significantly improve rendering quality. To address this issue, existing work partitions the scene and assigns different 3D points to different network parameters. However, a 3D point may be invisible to some rays due to occlusions in complex scenes, and training such a point with rays that carry no valid information about it can interfere with NeRF training. Based on this intuition, we allocate model parameters along the ray dimension and propose a Gate-guided Mutual Learning framework for neural rendering (GML-NeRF). Specifically, we construct an ensemble of sub-NeRFs and train a soft gate module that assigns gating scores to these sub-NeRFs based on individual rays. The gate module is jointly optimized with the sub-NeRF ensemble, enabling it to automatically learn the preference of sub-NeRFs for different rays. Furthermore, we introduce depth-based mutual learning to enhance the rendering consistency among the sub-NeRFs and mitigate depth ambiguity. Experiments on five diverse datasets demonstrate that GML-NeRF improves rendering quality across a wide range of scene types compared with existing single-NeRF and multi-NeRF methods.
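The ray-wise soft gating described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sub-NeRFs are stand-in linear heads, the per-ray features, dimensions, and weight names are all hypothetical, and the softmax gate blending per-ray predictions is the only element taken from the abstract.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
num_subnerfs, feat_dim = 4, 6  # hypothetical ensemble size and ray-feature size
# Stand-in "sub-NeRFs": each maps a per-ray feature vector to an RGB value.
sub_weights = rng.normal(size=(num_subnerfs, feat_dim, 3))
# Soft gate module: maps the same ray feature to one score per sub-NeRF.
gate_weights = rng.normal(size=(feat_dim, num_subnerfs))

def render(ray_feats):
    """Blend per-sub-NeRF predictions with ray-conditioned gating scores.

    ray_feats: (R, feat_dim) per-ray embedding (e.g. from origin + direction).
    """
    scores = softmax(ray_feats @ gate_weights, axis=-1)       # (R, K), rows sum to 1
    preds = np.einsum('rf,kfc->rkc', ray_feats, sub_weights)  # (R, K, 3) per-expert RGB
    return np.einsum('rk,rkc->rc', scores, preds)             # gate-weighted blend

rays = rng.normal(size=(5, feat_dim))
rgb = render(rays)
print(rgb.shape)  # (5, 3)
```

Because the gate is a differentiable softmax rather than a hard partition, every sub-NeRF receives gradient from every ray, which is what allows the gate and the ensemble to be optimized jointly.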