View synthesis with 3D object segmentation-based asynchronous blending and boundary misalignment rectification
Abstract: Numerous depth image-based rendering algorithms have been proposed to synthesize virtual views for free viewpoint television. However, inaccuracies in the depth map cause visual artifacts in the virtual view. In this paper, we propose a novel view synthesis framework to create virtual views of the scene. We incorporate a trilateral depth filter that combines local texture information, spatial proximity, and color similarity to remove ghost contours by rectifying the misalignment between the depth map and its associated color image. To further enhance the quality of the synthesized virtual views, we partition the scene into 3D object segments based on the color image and depth map. Each 3D object segment is warped and blended independently to avoid mixing pixels that belong to different parts of the scene. The evaluation results indicate that the proposed method significantly improves the quality of the synthesized virtual views compared with other methods, and that the results are qualitatively very similar to the ground truth. In addition, the method also performs well on real-world scenes.
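The trilateral depth filter described above weights each neighboring depth sample by spatial proximity, color similarity, and local texture. A minimal sketch of that weighting, assuming Gaussian kernels and a gradient-magnitude texture term (the function name, window radius, and sigma values are illustrative assumptions, not the paper's actual parameters):

```python
import numpy as np

def trilateral_depth_filter(depth, color, radius=2,
                            sigma_s=2.0, sigma_c=10.0, sigma_t=5.0):
    """Smooth a depth map with weights from spatial proximity,
    color similarity, and local texture (illustrative sketch).

    depth: (H, W) float array; color: (H, W, 3) float array.
    All sigma values are assumptions, not the paper's settings.
    """
    h, w = depth.shape
    # Simple texture proxy: gradient magnitude of the grayscale image.
    gray = color.mean(axis=2)
    gy, gx = np.gradient(gray)
    texture = np.hypot(gx, gy)

    out = np.empty_like(depth)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            # Spatial proximity weight (Gaussian in pixel distance).
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            # Color similarity weight (Gaussian in RGB distance).
            c_diff = color[y0:y1, x0:x1] - color[y, x]
            w_c = np.exp(-(c_diff ** 2).sum(axis=2) / (2 * sigma_c ** 2))
            # Texture similarity weight (Gaussian in gradient-magnitude difference).
            t_diff = texture[y0:y1, x0:x1] - texture[y, x]
            w_t = np.exp(-t_diff ** 2 / (2 * sigma_t ** 2))
            wgt = w_s * w_c * w_t
            out[y, x] = (wgt * depth[y0:y1, x0:x1]).sum() / wgt.sum()
    return out
```

Because the color and texture terms suppress averaging across object boundaries, depth edges are pulled back into alignment with color edges, which is what removes the ghost contours the abstract mentions.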
External IDs: dblp:journals/vc/LiuLFWSY16