Depth-guided deep filtering network for efficient single image bokeh rendering

Published: 01 Jan 2023, Last Modified: 04 Nov 2023. Neural Comput. Appl. 2023
Abstract: The bokeh effect is commonly used to highlight the main subject of an image. Limited by their small sensors, smartphone cameras are less sensitive to depth information and cannot directly produce a bokeh effect as pleasing as that of digital single-lens reflex cameras. To address this problem, a depth-guided deep filtering network, called DDFN, is proposed in this study. Specifically, a focused region detection block is designed to detect salient areas, and a depth estimation block is introduced to estimate depth maps from full-focus images. Further, combining the depth maps and focused features, an adaptive rendering block is proposed to synthesize the bokeh effect with adaptive cross 1-D filters. Both quantitative and qualitative evaluations on public datasets demonstrate that the proposed model performs favorably against state-of-the-art methods in terms of rendering quality while incurring lower computational cost, e.g., 24.07 dB PSNR on the EBB! dataset and an inference time of 0.45 s for a $$512 \times 768$$ image on a Snapdragon 865 mobile processor.
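The abstract's "adaptive cross 1-D filters" suggest per-pixel separable filtering guided by depth and focus cues. The sketch below is a minimal, hypothetical PyTorch illustration of that general idea, not the paper's actual implementation: the class name AdaptiveCross1DFilter, the kernel size, and the guidance channel count are all assumptions introduced here for illustration. It predicts a horizontal and a vertical 1-D kernel for every pixel from a guidance feature map (assumed to carry depth and focus information) and applies them sequentially to the full-focus image.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveCross1DFilter(nn.Module):
    """Hypothetical sketch of depth/focus-guided separable (cross 1-D) filtering.

    Per-pixel horizontal and vertical 1-D kernels are predicted from a guidance
    feature map and applied one after the other to the input image.
    """

    def __init__(self, guide_channels: int = 64, kernel_size: int = 9):
        super().__init__()
        self.k = kernel_size
        # Predict one horizontal and one vertical 1-D kernel per pixel (assumed head).
        self.kernel_head = nn.Conv2d(guide_channels, 2 * kernel_size, 3, padding=1)

    def forward(self, image: torch.Tensor, guide: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W) full-focus input; guide: (B, C, H, W) depth + focus features
        b, _, h, w = image.shape
        kernels = self.kernel_head(guide)            # (B, 2k, H, W)
        kh, kv = kernels.split(self.k, dim=1)        # horizontal / vertical kernel weights
        kh = F.softmax(kh, dim=1)                    # normalize so weights sum to 1
        kv = F.softmax(kv, dim=1)
        pad = self.k // 2

        # Horizontal pass: blend k horizontally shifted copies of the image per pixel.
        x = F.pad(image, (pad, pad, 0, 0), mode="replicate")
        patches = torch.stack([x[..., i:i + w] for i in range(self.k)], dim=2)  # (B, 3, k, H, W)
        out = (patches * kh.unsqueeze(1)).sum(dim=2)

        # Vertical pass on the horizontally filtered result.
        x = F.pad(out, (0, 0, pad, pad), mode="replicate")
        patches = torch.stack([x[..., i:i + h, :] for i in range(self.k)], dim=2)
        out = (patches * kv.unsqueeze(1)).sum(dim=2)
        return out

# Minimal usage on an image of the resolution quoted in the abstract,
# with a placeholder 64-channel guidance tensor standing in for depth + focus features.
img = torch.rand(1, 3, 512, 768)
guide = torch.rand(1, 64, 512, 768)
bokeh = AdaptiveCross1DFilter()(img, guide)  # (1, 3, 512, 768)
```

Separable 1-D filtering like this keeps the per-pixel cost linear in the kernel size rather than quadratic, which is one plausible reason such filters would be attractive for on-device rendering.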