Depth From Camera Model

24 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Depth Estimation, Camera Model, 3D Reconstruction
Abstract: Depth estimation is pivotal for robotics and vision-centric tasks. In monocular depth estimation, supervised learning methods achieve higher accuracy than self-supervised ones, but they require expensive ground-truth depth labels. In the era of deep learning, many methods focus on leveraging relationships between images to train neural networks. However, the intrinsic and extrinsic properties of the camera, which can offer a wealth of supervisory signal, are often overlooked. By exploiting the camera's inherent properties, depth for ground regions and for areas connected to the ground can be deduced from physical principles. This approach capitalizes on a freely available depth prior without the need for additional sensors, and it is a straightforward methodology that can be integrated to improve the efficiency of existing supervised methods.
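The geometric prior the abstract alludes to can be made concrete: under a flat-ground assumption, a pixel that images the ground plane has a depth fully determined by the camera intrinsics, its height above the ground, and its pitch. The sketch below is illustrative only (the function name, arguments, and coordinate conventions are assumptions, not the paper's actual formulation); it back-projects a pixel to a ray and intersects that ray with the ground plane.

```python
import numpy as np

def ground_depth(u, v, fx, fy, cx, cy, cam_height, pitch=0.0):
    """Depth of the ground point seen at pixel (u, v), or None if the
    ray does not hit the ground. Camera frame: x right, y down, z forward;
    the ground plane sits at y = cam_height in a gravity-aligned frame."""
    # Back-project the pixel to a viewing ray in camera coordinates.
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Rotate the ray into a gravity-aligned frame (pitch about the x-axis).
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,   c,  -s],
                  [0.0,   s,   c]])
    ray_w = R @ ray
    if ray_w[1] <= 0:
        # Ray points at or above the horizon: no ground intersection.
        return None
    # Scale the ray so it reaches the plane y = cam_height.
    t = cam_height / ray_w[1]
    point = t * ray_w
    return point[2]  # depth along the forward axis
```

With zero pitch this reduces to the familiar closed form `depth = fy * cam_height / (v - cy)`, valid only for pixels below the principal point; such a prior could supervise ground pixels without any depth sensor.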
Primary Area: applications to robotics, autonomy, planning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8536