Unsupervised Simultaneous Depth-from-defocus and Depth-from-focus

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Withdrawn Submission · Readers: Everyone
Keywords: Depth-from-defocus, Depth-from-focus, Unsupervised learning
Abstract: If the accuracy of depth estimation from a single RGB image could be improved, it would become possible to eliminate the need for expensive and bulky depth-sensing hardware. Most efforts toward this end have focused on exploiting geometric constraints, image sequences, or stereo image pairs with the help of a deep neural network. In this work, we propose a framework for simultaneous depth estimation from a single image and from image focal stacks using depth-from-defocus and depth-from-focus algorithms. The proposed network learns a depth mapping from the defocus blur contained in a single image, generates a simulated image focal stack and an all-in-focus image, and trains a depth estimator on the image focal stack. As there is no large dataset designed specifically for our problem, we first train on a synthetic indoor dataset based on NYUv2. We then compare against other existing methods on a DSLR dataset. Finally, we collect our own dataset with a DSLR camera and further verify our approach on it. Experiments demonstrate that our system provides results comparable to other state-of-the-art methods.
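
To make the pipeline concrete, below is a minimal, heavily simplified sketch in PyTorch of the kind of loop the abstract describes: a depth-from-defocus (DfD) network estimates depth from one defocused image, a thin-lens simulator renders a focal stack from that depth, and a depth-from-focus (DfF) network re-estimates depth from the simulated stack. Everything in it is an assumption for illustration: `coc_radius`, `render_focal_stack`, `DepthNet`, the focus distances, and the single-sigma-per-slice blur are placeholders, not the authors' implementation.

```python
# Hypothetical sketch of a DfD + simulated-focal-stack + DfF loop (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def coc_radius(depth, focus_dist, focal_len=0.05, f_number=2.0):
    """Thin-lens circle-of-confusion radius (metres) per pixel."""
    aperture = focal_len / f_number
    return aperture * focal_len * (depth - focus_dist).abs() / (depth * (focus_dist - focal_len))


def gaussian_blur(img, sigma):
    """Uniform Gaussian blur used as a coarse stand-in for defocus blur."""
    k = int(2 * round(3 * sigma) + 1)
    x = torch.arange(k, dtype=img.dtype, device=img.device) - k // 2
    g = torch.exp(-x ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    kernel = (g[:, None] * g[None, :]).expand(img.shape[1], 1, k, k).contiguous()
    return F.conv2d(img, kernel, padding=k // 2, groups=img.shape[1])


def render_focal_stack(aif, depth, focus_dists, px_per_metre=3000.0):
    """Render one defocused slice per focus distance. For brevity each slice uses a
    single sigma taken from the mean CoC, ignoring spatially varying blur."""
    slices = []
    for fd in focus_dists:
        sigma = float(coc_radius(depth, fd).mean() * px_per_metre) + 1e-3
        slices.append(gaussian_blur(aif, sigma))
    return torch.stack(slices, dim=1)  # B x S x C x H x W


class DepthNet(nn.Module):
    """Tiny dense-prediction CNN standing in for the DfD / DfF estimators."""
    def __init__(self, in_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Softplus(),  # keep depth positive
        )

    def forward(self, x):
        return self.body(x)


# Toy forward pass: the DfD depth drives the simulator, the DfF net reads the
# simulated stack, and an L1 consistency term couples the two depth estimates.
aif = torch.rand(1, 3, 64, 64)                    # placeholder all-in-focus image
gt_depth = 1.0 + 4.0 * torch.rand(1, 1, 64, 64)   # placeholder depth in metres
focus_dists = [1.0, 2.0, 4.0]

defocused = render_focal_stack(aif, gt_depth, [2.0])[:, 0]   # single blurred input
dfd_net, dff_net = DepthNet(3), DepthNet(3 * len(focus_dists))

depth_dfd = dfd_net(defocused)
stack = render_focal_stack(aif, depth_dfd.detach(), focus_dists)
depth_dff = dff_net(stack.flatten(1, 2))
consistency_loss = F.l1_loss(depth_dfd, depth_dff)
```

In a real system the all-in-focus image would itself be predicted rather than given, and the blur rendering would vary per pixel with the circle of confusion; this sketch only shows how the DfD and DfF branches can be tied together through a simulated focal stack.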
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: We propose a framework for simultaneous depth estimation from a single image and image focal stacks using depth-from-defocus and depth-from-focus algorithms.
Supplementary Material: zip
Reviewed Version (pdf): https://openreview.net/references/pdf?id=BtSVrV0TGI