ViCA-NeRF: View-Consistency-Aware 3D Editing of Neural Radiance Fields

Published: 21 Sept 2023, Last Modified: 19 Jan 2024 · NeurIPS 2023 poster
Keywords: neural radiance field, diffusion model, editing
TL;DR: View-Consistency-Aware 3D Editing
Abstract: We introduce ViCA-NeRF, the *first* view-consistency-aware method for 3D editing with text instructions. In addition to the implicit neural radiance field (NeRF) modeling, our key insight is to exploit two sources of regularization that *explicitly* propagate the editing information across different views, thus ensuring multi-view consistency. For *geometric regularization*, we leverage the depth information derived from NeRF to establish image correspondences between different views. For *learned regularization*, we align the latent codes in the 2D diffusion model between edited and unedited images, enabling us to edit key views and propagate the update throughout the entire scene. Incorporating these two strategies, our ViCA-NeRF operates in two stages. In the initial stage, we blend edits from different views to create a preliminary 3D edit. This is followed by a second stage of NeRF training, dedicated to further refining the scene's appearance. Experimental results demonstrate that ViCA-NeRF provides more flexible and efficient (3 times faster) editing with higher consistency and finer detail compared with the state of the art. Our code is available at: https://github.com/Dongjiahua/VICA-NeRF
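The geometric regularization described above relies on depth-based correspondences between views. The following is a minimal sketch of that idea, not the authors' implementation: NeRF-rendered depth is used to reproject pixels of an edited key view into another view so the edit can be propagated. Camera conventions and all function names here (`backproject`, `project`, `propagate_edit`) are illustrative assumptions.

```python
# Minimal sketch of depth-based edit propagation between two posed views.
# Assumes pinhole intrinsics K (3x3) and camera-to-world poses c2w (4x4);
# occlusion handling and blending with the target view's own edit are omitted.
import numpy as np

def backproject(depth, K, c2w):
    """Lift every pixel of a source view to world-space points using its NeRF depth."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # (3, HW)
    cam = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)               # camera coords
    cam_h = np.concatenate([cam, np.ones((1, cam.shape[1]))], axis=0)   # homogeneous
    return (c2w @ cam_h)[:3].T                                          # (HW, 3) world points

def project(points_w, K, c2w_target):
    """Project world points into a target view; return pixel coords and depth."""
    w2c = np.linalg.inv(c2w_target)
    pts_h = np.concatenate([points_w, np.ones((len(points_w), 1))], axis=1)
    cam = (w2c @ pts_h.T)[:3]                         # (3, HW) in target camera frame
    z = cam[2]
    uv = (K @ cam)[:2] / np.clip(z, 1e-6, None)       # perspective divide
    return uv.T, z

def propagate_edit(edited_rgb, depth_src, K, c2w_src, c2w_tgt):
    """Warp an edited key view into a target view via depth-based correspondence."""
    H, W, _ = edited_rgb.shape
    world = backproject(depth_src, K, c2w_src)
    uv, z = project(world, K, c2w_tgt)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    target = np.zeros((H, W, 3))
    mask = np.zeros((H, W), dtype=bool)
    target[v[valid], u[valid]] = edited_rgb.reshape(-1, 3)[valid]
    mask[v[valid], u[valid]] = True
    return target, mask   # mask marks pixels that received a propagated edit
```

In practice such warped edits from several key views would be blended and then used to supervise the second-stage NeRF refinement described in the abstract.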
Supplementary Material: zip
Submission Number: 11585