Disentangled Rendering Loss for Supervised Material Property Recovery
Abstract: To replicate the behavior of real-world materials in computer graphics, accurate material property maps must be predicted for use in a pixel-wise, multi-variable rendering function. Recent deep learning techniques compute the loss on the predicted material property maps from the rendered image. While a rendering loss defined this way improves the quality of the predicted renderings, it struggles to recover the individual property maps accurately. These inaccuracies arise for three reasons: i) different combinations of property values can generate the same image under a limited set of light and view directions, ii) even correctly predicted property maps are altered because the loss backpropagates gradients to all maps, and iii) the heuristic choice of the number of light and view samples affects both accuracy and computation time. We propose a new loss function, the disentangled rendering loss, which addresses these issues: each predicted property map is rendered together with the ground-truth versions of the other maps rather than the other predictions, and we solve for the integral of the L1 loss of the specular term over light and view directions, avoiding the need for multiple light and view samples. We show that training the current state-of-the-art network with our disentangled rendering loss leads to a noticeable increase in the accuracy of the recovered material property maps.
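The following is a minimal sketch of the disentangling idea described in the abstract: each predicted property map is rendered together with the ground-truth versions of the other maps, so the rendering loss sends gradients only to the map being evaluated. The map names (albedo, normal, roughness, specular), the simple Lambertian-plus-Blinn-Phong shading model, and the explicit loop over sampled light and view directions are assumptions made for illustration; the paper instead solves the integral of the L1 loss of the specular term in closed form, which this sketch approximates by sampling.

```python
import torch
import torch.nn.functional as F

def render(albedo, normal, roughness, specular, light, view):
    """Stand-in per-pixel shading model (Lambertian diffuse plus a
    Blinn-Phong-style specular term), not the paper's exact renderer.
    Maps are (B, C, H, W) tensors; light/view are unit 3-vectors."""
    n = F.normalize(normal, dim=1)
    h = F.normalize(light + view, dim=0)                     # half vector
    n_dot_l = (n * light.view(1, 3, 1, 1)).sum(1, keepdim=True).clamp(min=0)
    n_dot_h = (n * h.view(1, 3, 1, 1)).sum(1, keepdim=True).clamp(min=0)
    shininess = (2.0 / roughness.clamp(min=1e-3) ** 2).clamp(max=256.0)
    diffuse = albedo * n_dot_l
    spec = specular * (n_dot_h ** shininess) * n_dot_l
    return diffuse + spec

def disentangled_rendering_loss(pred, gt, directions):
    """For each property map, render using that single predicted map with the
    ground-truth versions of all other maps, so the L1 rendering loss only
    backpropagates to the map under evaluation.  `pred` and `gt` are dicts of
    property maps; `directions` is a list of (light, view) direction pairs."""
    keys = ["albedo", "normal", "roughness", "specular"]
    total = 0.0
    for k in keys:
        maps = {m: (pred[m] if m == k else gt[m]) for m in keys}
        for light, view in directions:
            img_pred = render(maps["albedo"], maps["normal"],
                              maps["roughness"], maps["specular"], light, view)
            img_gt = render(gt["albedo"], gt["normal"],
                            gt["roughness"], gt["specular"], light, view)
            total = total + F.l1_loss(img_pred, img_gt)
    return total / (len(keys) * len(directions))
```

Because the ground-truth maps carry no gradient, each of the four render passes isolates the error attributable to a single predicted map, which is the disentangling property the abstract contrasts with the standard rendering loss.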