Learning Naturally Aggregated Appearance for Efficient 3D Editing

Published: 23 Mar 2025, Last Modified: 24 Mar 2025
3DV 2025 Poster
License: CC BY 4.0
Keywords: Appearance Modelling, Vision Application
TL;DR: Our method learns to naturally aggregate 3D appearance into a 2D canonical image, supporting various ways of 3D editing in an efficient manner.
Abstract: Neural radiance fields, which represent a 3D scene as a color field and a density field, have demonstrated great progress in novel view synthesis yet are unfavorable for editing due to their implicitness. This work studies the task of efficient 3D editing, where we focus on **editing speed** and **user interactivity**. To this end, we propose to learn the color field as an explicit 2D appearance aggregation, also called a canonical image, with which users can easily customize their 3D editing via 2D image processing. We complement the canonical image with a projection field that maps 3D points onto 2D pixels for texture query. This field is initialized with a pseudo canonical camera model and optimized with offset regularity to ensure the **naturalness** of the canonical image. Extensive experiments on different datasets suggest that our representation, dubbed ***AGAP***, well supports various ways of 3D editing (*e.g.*, stylization, instance segmentation, and interactive drawing). Our approach demonstrates remarkable efficiency, being at least 20$\times$ faster per edit than existing NeRF-based editing methods. Project page is available at https://felixcheng97.github.io/AGAP/.
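
To make the texture-query mechanism described in the abstract concrete, below is a minimal PyTorch sketch, not the authors' implementation: the class and function names, the pinhole-plus-offset parameterization, and the squared-offset penalty are our assumptions. A pseudo canonical camera projects 3D points onto the canonical image, a small MLP predicts per-point offsets, and colors are fetched with bilinear sampling.

```python
import torch
import torch.nn.functional as F

class ProjectionField(torch.nn.Module):
    """Hypothetical sketch of a projection field: maps 3D points to
    2D canonical-image coordinates via a pseudo canonical camera plus
    a small learnable offset."""
    def __init__(self, hidden=64):
        super().__init__()
        self.offset = torch.nn.Sequential(
            torch.nn.Linear(3, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 2),
        )

    def forward(self, xyz, K, pose):
        # Project points with the pseudo canonical camera (pinhole model).
        cam = (pose[:3, :3] @ xyz.T + pose[:3, 3:]).T   # world -> camera
        uv = (K @ cam.T).T
        uv = uv[:, :2] / uv[:, 2:].clamp(min=1e-6)      # perspective divide
        delta = self.offset(xyz)                        # learned refinement
        return uv + delta, delta

def query_color(canonical_image, uv, image_size):
    """Bilinearly sample colors from the 2D canonical image at projected uv.

    canonical_image: (3, H, W) tensor; image_size: tensor([W, H]).
    """
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    grid = 2.0 * uv / (image_size - 1.0) - 1.0
    grid = grid.view(1, -1, 1, 2)                       # (1, N, 1, 2)
    rgb = F.grid_sample(canonical_image.unsqueeze(0), grid,
                        align_corners=True)             # (1, 3, N, 1)
    return rgb.view(3, -1).T                            # (N, 3)

# A simple squared-offset penalty, e.g. loss_reg = delta.pow(2).mean(),
# stands in here for the paper's offset-regularity term, which keeps
# projections close to the pseudo camera and the canonical image natural.
```

Under this sketch, editing amounts to modifying `canonical_image` with any off-the-shelf 2D tool; since the projection field is fixed after training, re-rendering picks up the edit without per-edit optimization.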
Supplementary Material: zip
Submission Number: 105