Towards Depth-Continuous Scene Representation With a Displacement Field for Robust Light Field Depth Estimation

Published: 01 Jan 2025 · Last Modified: 25 Jul 2025 · IEEE Trans. Multim. 2025 · CC BY-SA 4.0
Abstract: Light field (LF) imaging captures both spatial and angular information of a scene, enabling accurate depth estimation. However, previous deep learning methods typically model surface depth only, ignoring the continuous nature of depth in 3D scenes. In this paper, we use a displacement field (DF) to describe this continuity and propose a novel depth-continuous scene representation for robust LF depth estimation. Experiments demonstrate that our representation enables the network to generate highly detailed depth maps with fewer parameters and faster inference. Specifically, inspired by the signed distance field used to describe 3D objects, we exploit the intrinsic depth-continuous property of 3D scenes through the DF and define a novel depth-continuous scene representation. We then introduce a simple yet general learning framework for depth-continuous scene embedding; the proposed network, DepthDF, achieves state-of-the-art performance on both synthetic and real-world LF datasets, ranking 1st on the HCI 4D Light Field benchmark. Furthermore, previous LF depth estimation methods can be seamlessly integrated into this framework. Finally, we extend the framework beyond LF depth estimation to other tasks, including multi-view stereo depth inference, LF super-resolution, and LF salient object detection. Experiments demonstrate improved performance when the continuous scene representation is applied, suggesting that our framework can bring insights to more fields.
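The abstract does not spell out the displacement field formally, so the following is a rough intuition only: by analogy with a signed distance field for 3D objects, a depth-continuous representation can be pictured as a field defined over the whole depth volume whose zero level set coincides with the surface depth map. The minimal Python sketch below illustrates that reading under this assumption; the functions `displacement_field` and `recover_depth` and the toy data are hypothetical illustrations, not the paper's DepthDF implementation.

```python
import numpy as np

def displacement_field(depth_map, z):
    """Hypothetical depth-continuous representation: for every pixel (x, y)
    and candidate depth z, return the signed displacement from z to the
    surface depth D(x, y). The surface is the field's zero level set, and
    the field is defined at every depth, not just on the surface.
    """
    return z - depth_map  # broadcasts a scalar z against an (H, W) map

def recover_depth(depth_map, z_samples):
    """Recover the surface by locating the zero level set of the field
    along the depth axis over a set of depth hypotheses."""
    # Evaluate the field on a (Z, H, W) volume of depth hypotheses.
    field = np.stack([displacement_field(depth_map, z) for z in z_samples])
    # Per pixel, pick the hypothesis closest to the zero crossing.
    idx = np.abs(field).argmin(axis=0)
    return z_samples[idx]

# Toy usage: a 4x4 scene made of two fronto-parallel depth planes.
depth = np.where(np.arange(16).reshape(4, 4) < 8, 1.0, 2.5)
z_grid = np.linspace(0.0, 3.0, 61)
print(np.allclose(recover_depth(depth, z_grid), depth))  # True
```

The only point of the sketch is that a surface depth map is recoverable as the zero crossing of a field defined throughout the volume, which is what makes such a representation "depth-continuous"; the paper's actual DF definition and learning framework are given in the full text.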