Abstract: Intrinsic decomposition of 3D scenes from multi-view images is challenging, especially under adverse lighting conditions. We propose a novel event-based intrinsic decomposition framework that leverages both events and images for stable decomposition in extreme scenarios. Our method builds on two observations: event cameras maintain good imaging quality even in adverse conditions, and events from different viewpoints are similar in diffuse regions while varying in specular regions. We establish an event-based reflectance model and introduce an event-based warping method to extract specular clues. Our two-part framework first constructs a radiance field and then decomposes the scene into surface normals, material, and lighting. Experimental results demonstrate superior performance compared to state-of-the-art methods. Our contributions include an event-based reflectance model, event warping-based consistency learning, and a complete framework for event-based intrinsic decomposition.
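To make the two observations concrete, below is a minimal sketch assuming an idealized event-generation model (an event fires when the log-intensity change exceeds a contrast threshold C) and a precomputed cross-view warp. The function names (simulate_events, specular_clue) and the residual-based cue are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def simulate_events(log_I_prev, log_I_curr, C=0.2):
    """Idealized event generation: an event of polarity +/-1 fires at
    pixels where the log-intensity change exceeds the threshold C."""
    delta = log_I_curr - log_I_prev
    fired = np.abs(delta) >= C
    return np.sign(delta) * fired  # +1/-1 where an event fires, 0 elsewhere

def specular_clue(events_a_warped_to_b, events_view_b):
    """Cross-view consistency of warped events: a small residual suggests
    a diffuse (view-independent) region, a large one suggests specularity.
    The warp from view A to view B is assumed to be given (e.g., from
    estimated geometry)."""
    return np.abs(events_a_warped_to_b - events_view_b)
```

Under this sketch, diffuse surfaces yield near-zero residuals because their appearance, and hence the events they trigger, is view-independent, while specular highlights shift with the viewpoint and produce large residuals that can supervise the decomposition.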
Primary Subject Area: [Experience] Multimedia Applications
Secondary Subject Area: [Content] Multimodal Fusion
Relevance To Conference: This work contributes to multimedia and multimodal processing by pioneering the use of events together with images for intrinsic decomposition. As the first method to incorporate events into this task, our approach leverages the complementary information of the two modalities to achieve stable decomposition even under challenging lighting conditions. The multimodal nature of our method demonstrates the value of integrating different data types in complex computer vision tasks. Moreover, our approach enables relighting in extreme conditions, such as dark caves, with direct implications for applications like digital heritage preservation. By pushing the boundaries of what is possible in multimedia and multimodal processing, our work showcases the practical value and wide-ranging potential of combining events and images for intrinsic decomposition and relighting.
Supplementary Material: zip
Submission Number: 285