Keywords: 3D Reconstruction, Relighting, Multiview Diffusion
TL;DR: We reconstruct objects captured in environments with extremely different illuminations by first relighting the views consistently and then reconstructing the object.
Abstract: Reconstructing the geometry and appearance of objects from photographs taken in different environments is difficult because the illumination, and therefore the object appearance, varies across the captured images. This is particularly challenging for specular objects, whose appearance strongly depends on the viewing direction. Some prior approaches model appearance variation across images using a per-image embedding vector, while others use physically-based rendering to recover the materials and per-image illumination. Such approaches fail to faithfully recover view-dependent appearance given the significant variation in input illumination, and they tend to produce mostly diffuse results. We present an approach that reconstructs objects from images taken under different illuminations by first relighting the images under a single reference illumination with a multiview relighting diffusion model, and then reconstructing the object's geometry and appearance with a radiance field architecture that is robust to the minor remaining inconsistencies among the relit images.
We validate our approach on synthetic and real datasets and demonstrate that it outperforms existing techniques at reconstructing high-fidelity appearance from images taken under extreme illumination variation. Moreover, our approach is particularly effective at recovering view-dependent "shiny" appearance which cannot be reconstructed by prior methods.
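For concreteness, here is a minimal PyTorch sketch of the two-stage pipeline the abstract describes. Every name in it is an illustrative placeholder rather than the authors' implementation: `relight_fn` stands in for the multiview relighting diffusion model, `RadianceField` is a toy MLP rather than a full volume renderer, and the Charbonnier loss is one common choice of robust reconstruction loss (the abstract does not specify the robustness mechanism).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def charbonnier(pred, target, eps=1e-3):
    """Robust loss: tolerates the minor inconsistencies that remain
    among the relit views instead of letting outliers dominate."""
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

class RadianceField(nn.Module):
    """Toy stand-in for a radiance field: an MLP from 3D point + view
    direction to RGB (a real system would add positional encoding,
    density prediction, and volume rendering)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, points, dirs):
        return self.mlp(torch.cat([points, dirs], dim=-1))

def reconstruct(views, relight_fn, steps=1000, batch=1024):
    """Two-stage pipeline (hypothetical data layout):
    views: list of dicts with 'points' (N,3), 'dirs' (N,3), 'rgb' (N,3)
    relight_fn: callable standing in for the multiview relighting
    diffusion model, mapping all views to one reference illumination."""
    # Stage 1: relight every input view to a single reference illumination.
    relit = relight_fn(views)

    # Stage 2: fit the radiance field to the relit views with a robust loss.
    field = RadianceField()
    opt = torch.optim.Adam(field.parameters(), lr=1e-3)
    for _ in range(steps):
        v = relit[torch.randint(len(relit), (1,)).item()]
        idx = torch.randint(v['rgb'].shape[0], (batch,))
        pred = field(v['points'][idx], v['dirs'][idx])
        loss = charbonnier(pred, v['rgb'][idx])
        opt.zero_grad()
        loss.backward()
        opt.step()
    return field

if __name__ == "__main__":
    # Smoke test with random data and an identity "relighting" placeholder.
    views = [{'points': torch.rand(4096, 3),
              'dirs': F.normalize(torch.randn(4096, 3), dim=-1),
              'rgb': torch.rand(4096, 3)} for _ in range(8)]
    field = reconstruct(views, relight_fn=lambda v: v, steps=100)
```

The key design point the sketch mirrors is the ordering: illumination is normalized across views before any geometry or appearance fitting, so the downstream reconstruction only has to absorb small residual inconsistencies rather than the full input illumination variation.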
Supplementary Material: pdf
Submission Number: 13