Self-Guided Novel View Synthesis via Elastic Displacement Network

Published: 01 Jan 2020 · Last Modified: 18 Aug 2025 · WACV 2020 · CC BY-SA 4.0
Abstract: Synthesizing a novel view of a scene from a different viewpoint is an essential problem in 3D vision. Among the variety of view synthesis tasks, single-image view synthesis is particularly challenging. Recent works address this problem with a fixed number of image planes at discrete disparities, which tend to produce structurally inconsistent results on wide-baseline datasets with complex scenes, such as KITTI. In this paper, we propose the Self-Guided Elastic Displacement Network (SG-EDN), which explicitly models the geometric transformation through a novel non-discrete scene representation called layered displacement maps (LDM). To generate realistic views, we exploit the positional characteristics of the displacement maps and design a multi-scale structural pyramid for self-guided filtering of the displacement maps. To optimize efficiency and scene adaptivity, we allow the effective range of each displacement map to be "elastic", with fully learnable parameters. Experimental results confirm that our framework outperforms existing methods in both quantitative and qualitative evaluations.
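The abstract does not include an implementation, but the core "elastic" idea can be sketched concretely: each of K displacement layers warps the source image by a displacement field scaled by a learnable per-layer range, and the warped layers are blended into the novel view. The sketch below is a minimal, hypothetical PyTorch rendering of that idea; the class name `ElasticDisplacementLayer`, the scalar-range parameterization, the initialization, and the softmax-weighted blending are all assumptions for illustration, not the paper's actual SG-EDN architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ElasticDisplacementLayer(nn.Module):
    """Minimal sketch: K layered displacement maps whose effective
    ('elastic') ranges are learnable scalars. Names and parameterization
    are assumptions, not the paper's implementation."""

    def __init__(self, num_layers: int = 8):
        super().__init__()
        # One learnable range per displacement layer; a coarse geometric
        # progression is an assumed, plausible initialization.
        init_ranges = torch.logspace(0.0, 1.5, num_layers)
        self.ranges = nn.Parameter(init_ranges)

    def forward(self, raw_maps: torch.Tensor, weights: torch.Tensor,
                image: torch.Tensor) -> torch.Tensor:
        # raw_maps: (B, K, H, W) unscaled horizontal displacements in [-1, 1]
        # weights:  (B, K, H, W) per-layer blend weights (softmax over K)
        # image:    (B, C, H, W) source view
        B, K, H, W = raw_maps.shape
        # Scale each layer's displacements by its elastic range (in pixels).
        disp = raw_maps * self.ranges.view(1, K, 1, 1)

        # Base sampling grid in normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(
            torch.linspace(-1.0, 1.0, H),
            torch.linspace(-1.0, 1.0, W),
            indexing="ij")
        base = torch.stack((xs, ys), dim=-1).to(image)  # (H, W, 2)

        layers = []
        for k in range(K):
            grid = base.unsqueeze(0).expand(B, H, W, 2).clone()
            # Horizontal (stereo-style) shift, converted to grid units.
            grid[..., 0] = grid[..., 0] + 2.0 * disp[:, k] / W
            layers.append(F.grid_sample(image, grid, align_corners=True))
        warped = torch.stack(layers, dim=1)            # (B, K, C, H, W)
        # Blend the warped layers into a single novel view.
        return (weights.unsqueeze(2) * warped).sum(dim=1)
```

Because the per-layer ranges are ordinary parameters, they receive gradients from the photometric loss and adapt to the scene's disparity distribution, in contrast to planes fixed at predetermined discrete disparities.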