Stereo Superpixel Segmentation Via Dual-Attention Fusion Networks

Published: 01 Jan 2021, Last Modified: 06 Nov 2023, ICME 2021
Abstract: Stereo image pairs can improve the performance of many tasks by providing additional information from a second viewpoint compared with single images. Existing superpixel segmentation algorithms for stereo images mostly take single images as input and neglect the correspondence between the left and right views. In this work, we exploit the depth information between stereo image pairs and propose an end-to-end dual-attention fusion network that generates parallax-consistent superpixels for stereo images. We first utilize a deep convolutional network to extract deep features from the stereo images. Then, to effectively utilize the additional information from the other view, the features of the left and right views are integrated by parallax attention and channel attention mechanisms. Finally, the stereo superpixels are generated by a differentiable clustering algorithm, which is end-to-end trainable with deep learning networks. Comprehensive experimental results demonstrate that our method outperforms state-of-the-art methods on the KITTI2015 and Cityscapes datasets.
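The following is a minimal sketch of how the dual-attention fusion described in the abstract could be structured in PyTorch: a row-wise (epipolar) parallax attention aligns right-view features to the left view, and a squeeze-and-excitation-style channel attention reweights the fused result. The module names, layer choices, and fusion order here are assumptions made for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class ParallaxAttention(nn.Module):
    """Cross-view attention along the horizontal (epipolar) axis: each pixel in the
    left feature map attends to all pixels in the same row of the right feature map."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, 1)
        self.key = nn.Conv2d(channels, channels, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, feat_left, feat_right):
        b, c, h, w = feat_left.shape
        # Queries from the left view, keys/values from the right view, grouped per row.
        q = self.query(feat_left).permute(0, 2, 3, 1).reshape(b * h, w, c)    # (B*H, W, C)
        k = self.key(feat_right).permute(0, 2, 1, 3).reshape(b * h, c, w)     # (B*H, C, W)
        v = self.value(feat_right).permute(0, 2, 3, 1).reshape(b * h, w, c)   # (B*H, W, C)

        attn = torch.softmax(torch.bmm(q, k) / c ** 0.5, dim=-1)              # (B*H, W, W)
        out = torch.bmm(attn, v)                                              # (B*H, W, C)
        return out.reshape(b, h, w, c).permute(0, 3, 1, 2)                    # (B, C, H, W)


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel reweighting."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)


class DualAttentionFusion(nn.Module):
    """Fuses left/right view features with parallax and channel attention (illustrative)."""

    def __init__(self, channels):
        super().__init__()
        self.parallax = ParallaxAttention(channels)
        self.channel = ChannelAttention(channels)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, feat_left, feat_right):
        aligned_right = self.parallax(feat_left, feat_right)  # right-view info aligned to the left view
        fused = self.fuse(torch.cat([feat_left, aligned_right], dim=1))
        return self.channel(fused)


if __name__ == "__main__":
    left = torch.randn(1, 64, 32, 64)
    right = torch.randn(1, 64, 32, 64)
    print(DualAttentionFusion(64)(left, right).shape)  # torch.Size([1, 64, 32, 64])
```

The fused left-view features would then feed the differentiable clustering step mentioned in the abstract to produce the superpixel assignments.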