Abstract: In this paper, we present a novel layered saliency model for Light Fields (LF) called Fourier Disparity Layer Saliency Estimation (FDLSE). The layers are constructed from the existing Fourier Disparity Layer (FDL) LF representation. Our FDLSE model can be used to predict the visual attention (VA) of any LF rendering with arbitrary viewpoint, aperture and depth-of-focus, without the need to generate the rendered image itself. The proposed model improves on previous work in several respects. Our method requires estimating the saliency map of only a single sub-aperture image rather than the full array of views. Furthermore, this model does not require pre-estimated disparity maps, but instead relies on the FDL model, whose computation fully takes advantage of GPU parallelisation. Finally, FDLSE shows visual improvements and performs quantitatively on par with our previous FGSE model when evaluated on VA prediction of refocused renderings. To our knowledge, these are the only two models that can be used to predict LF VA.
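The abstract's central claim rests on the FDL representation, in which each sub-aperture view is obtained by shifting a small set of Fourier-domain layers according to their assigned disparities. The minimal sketch below illustrates that rendering step only; the function name `render_view` and all parameter names are hypothetical, and the formula follows the published FDL representation (a view at angular position u is the sum over layers k of exp(2iπ d_k u·ω) F_k(ω)), not the FDLSE saliency model itself, which is described in the paper.

```python
import numpy as np

def render_view(layers_fft, disparities, u, shape):
    """Render one sub-aperture view from Fourier Disparity Layers.

    Hypothetical illustration of the FDL rendering equation:
        L_u(omega) = sum_k exp(2i*pi*d_k*(u . omega)) * F_k(omega)

    layers_fft  : (K, H, W) complex array, 2-D FFT of each layer
    disparities : (K,) disparity value assigned to each layer
    u           : (ux, uy) angular position of the requested view
    shape       : (H, W) spatial resolution of the views
    """
    H, W = shape
    wy = np.fft.fftfreq(H)[:, None]   # vertical spatial frequencies
    wx = np.fft.fftfreq(W)[None, :]   # horizontal spatial frequencies
    view_fft = np.zeros((H, W), dtype=complex)
    for Fk, dk in zip(layers_fft, disparities):
        # Each layer is translated by d_k * u via a Fourier-domain phase ramp
        # (the shift theorem), then the shifted layers are summed.
        phase = np.exp(2j * np.pi * dk * (u[0] * wx + u[1] * wy))
        view_fft += phase * Fk
    return np.real(np.fft.ifft2(view_fft))
```

Because the view is never formed pixel-by-pixel from a full view array, the same phase-shift machinery extends to aperture and refocus renderings by summing phase ramps over angular positions, which is what allows a saliency map defined on the layers to be propagated to any rendering without generating the image first.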