Abstract: We propose an approach to binocular stereo that avoids exhaustive photoconsistency
computations at every pixel, since they are redundant and computationally expensive, especially for high-resolution images. We argue that developing scalable stereo algorithms
is critical as image resolution is expected to continue increasing rapidly. Our approach
relies on oversegmentation of the images into superpixels, followed by photoconsistency
computation for only a random subset of the pixels of each superpixel. This generates
sparse reconstructed points, which are used to fit planes. Plane hypotheses are propagated
among neighboring superpixels and evaluated at each superpixel by selecting a
random subset of pixels on which to aggregate photoconsistency scores for the competing
planes. We performed extensive tests to characterize the performance of this algorithm
in terms of accuracy and speed on the full-resolution stereo pairs of the 2014 Middlebury
benchmark, which contains images of up to six megapixels. Our results show that very large
computational savings can be achieved at a small loss of accuracy. A multi-threaded
implementation of our method is faster than other methods that achieve similar accuracy,
and thus it provides a useful accuracy-speed tradeoff.
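
The pipeline outlined in the abstract could be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the superpixel map, the per-pixel matcher `match_pixel`, the cost function `photoconsistency`, and all parameter names are hypothetical stand-ins, and the plane model is a simple least-squares fit in disparity space.

```python
# Hypothetical sketch of sampling-based superpixel-plane stereo.
# Assumes precomputed superpixels and a per-pixel disparity matcher;
# none of these names come from the paper itself.
import numpy as np

rng = np.random.default_rng(0)

def fit_plane(points):
    """Least-squares fit of d = a*x + b*y + c to (N x 3) rows (x, y, d)."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def plane_cost(plane, pixels, photoconsistency):
    """Aggregate matching cost (lower is better) of one plane hypothesis
    over a random subset of pixels."""
    a, b, c = plane
    disp = a * pixels[:, 0] + b * pixels[:, 1] + c  # disparity implied by plane
    return sum(photoconsistency(x, y, d) for (x, y), d in zip(pixels, disp))

def sparse_plane_stereo(superpixels, neighbors, photoconsistency,
                        match_pixel, n_samples=50):
    """superpixels: dict id -> (N x 2) numpy array of pixel coords
       neighbors:   dict id -> list of adjacent superpixel ids
       match_pixel: (x, y) -> disparity, run only on sampled pixels"""
    planes = {}
    # 1. Photoconsistency on a random subset of each superpixel's pixels,
    #    then fit a plane to the resulting sparse reconstructed points.
    for sp, pix in superpixels.items():
        idx = rng.choice(len(pix), size=min(n_samples, len(pix)), replace=False)
        samples = pix[idx]
        disps = np.array([match_pixel(x, y) for x, y in samples])
        planes[sp] = fit_plane(np.column_stack([samples, disps]))
    # 2. Propagate plane hypotheses among neighbors; each superpixel keeps
    #    the candidate with the best aggregated cost on a random pixel subset.
    for sp, pix in superpixels.items():
        idx = rng.choice(len(pix), size=min(n_samples, len(pix)), replace=False)
        eval_pix = pix[idx]
        candidates = [planes[sp]] + [planes[n] for n in neighbors[sp]]
        planes[sp] = min(candidates,
                         key=lambda p: plane_cost(p, eval_pix, photoconsistency))
    return planes
```

The sketch only illustrates the central idea: because both plane fitting and hypothesis evaluation run on fixed-size random subsets, the per-superpixel cost is independent of superpixel area, which is where the computational savings over exhaustive per-pixel matching come from.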