Abstract: This paper tackles a new photometric stereo task, named universal photometric stereo. Unlike existing tasks, which assume specific physical lighting models and thus drastically limit their applicability, a solution to this task is expected to work for objects with diverse shapes and materials under arbitrary lighting variations without assuming any specific model. To solve this extremely challenging task, we present a purely data-driven method that eliminates the prior assumption on lighting by replacing the recovery of physical lighting parameters with the extraction of a generic lighting representation, named global lighting contexts. We use these contexts like lighting parameters in a calibrated photometric stereo network to recover surface normal vectors pixelwise. To adapt our network to a wide variety of shapes, materials, and lightings, it is trained on a new synthetic dataset that simulates the appearance of objects in the wild. Our method is compared with state-of-the-art uncalibrated photometric stereo methods on our test data to demonstrate its significance.
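For intuition only, the sketch below illustrates how a per-image global lighting context might stand in for calibrated lighting parameters in a pixelwise normal-regression network. This is not the authors' architecture; the module names, feature dimensions, and pooling scheme are assumptions, and the lighting contexts are treated as already extracted.

```python
import torch
import torch.nn as nn

class PixelwiseNormalNet(nn.Module):
    """Illustrative sketch: predicts a per-pixel surface normal from
    per-image pixel observations fused with a learned global lighting
    context (a stand-in for calibrated lighting parameters)."""

    def __init__(self, ctx_dim=256, hidden=128):
        super().__init__()
        # Fuse one pixel's RGB observation with its image's lighting context.
        self.fuse = nn.Sequential(
            nn.Linear(3 + ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Regress a normal from features pooled over all input images.
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, obs, ctx):
        # obs: (K, P, 3)  RGB of P pixels observed under K lightings
        # ctx: (K, ctx_dim) global lighting context per image (assumed given)
        K, P, _ = obs.shape
        feat = self.fuse(
            torch.cat([obs, ctx[:, None, :].expand(K, P, -1)], dim=-1)
        )
        pooled = feat.max(dim=0).values      # order-invariant pooling over images
        normals = self.head(pooled)          # (P, 3)
        return nn.functional.normalize(normals, dim=-1)

# Usage with random stand-in data: 8 images, 1024 sampled pixels.
net = PixelwiseNormalNet()
obs = torch.rand(8, 1024, 3)
ctx = torch.randn(8, 256)
print(net(obs, ctx).shape)  # torch.Size([1024, 3])
```

The key idea conveyed here is that, because the lighting context is a learned generic representation rather than a physical parameterization, the same pixelwise regression machinery can be applied without committing to a specific lighting model.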