Abstract: Separating an image of a scene illuminated by a light source into direct components, such as specular and diffuse reflection, and global components, such as interreflection and subsurface scattering, is an important preprocessing step for various computer vision and graphics applications. Conventional methods cannot separate the direct and global components at novel viewpoints, and they struggle to separate these components robustly from a small number of images even at known viewpoints. In this paper, we propose a method for synthesizing the direct and global components of a scene from novel viewpoints using a relatively small number of images. Specifically, our method takes multi-view images captured with a coaxial projector-camera system and recovers the density and radiance of each component on the basis of neural radiance fields (NeRF). We conduct a number of experiments on real images captured with a projector-camera system and confirm the effectiveness of our method. In addition, we demonstrate that our method is useful for two applications: image-based material editing and 3D shape recovery.
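To make the idea of recovering per-component density and radiance with a NeRF-style field concrete, below is a minimal sketch in PyTorch. It assumes a shared density with two radiance heads, one for the direct component and one for the global component, each composited by standard volume rendering; the module names (DirectGlobalField, direct_head, global_head), layer sizes, and encoding dimensions are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a NeRF-style field with a shared
# density and separate radiance heads for the direct and global components,
# composited along rays with standard volume rendering.
import torch
import torch.nn as nn


class DirectGlobalField(nn.Module):
    def __init__(self, pos_dim=63, dir_dim=27, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma = nn.Linear(hidden, 1)  # shared volume density
        self.direct_head = nn.Sequential(  # view-dependent direct radiance
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )
        self.global_head = nn.Sequential(  # global (indirect) radiance
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, x_enc, d_enc):
        # x_enc: positionally encoded sample points, d_enc: encoded view directions
        h = self.trunk(x_enc)
        sigma = torch.relu(self.sigma(h))
        feat = torch.cat([h, d_enc], dim=-1)
        return sigma, self.direct_head(feat), self.global_head(feat)


def composite(sigma, radiance, deltas):
    """Volume rendering along each ray.

    sigma: (rays, samples, 1), radiance: (rays, samples, 3), deltas: (rays, samples, 1)
    """
    alpha = 1.0 - torch.exp(-sigma * deltas)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=1), dim=1
    )[:, :-1]
    weights = alpha * trans
    return (weights * radiance).sum(dim=1)
```

In such a sketch, each ray would be rendered twice, once with the direct radiance and once with the global radiance, and the two renderings would be supervised against the separated multi-view observations obtained from the coaxial projector-camera captures.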
External IDs: dblp:conf/wacv/MatsufujiSKO25