Keywords: video stereo matching; 3D Vision
TL;DR: We introduce a memory buffer into the video stereo matching task, aiming to construct a buffer that is both compact and informative.
Abstract: Temporally consistent depth estimation from stereo video is critical for real-world applications such as augmented reality, where inconsistent depth predictions disrupt user immersion.
Despite its importance, this task remains challenging due to the difficulty in modeling long-term temporal consistency in a computationally efficient manner.
Previous methods attempt to address this by aggregating spatio-temporal information but face a fundamental trade-off: limited temporal modeling provides only modest gains, whereas capturing long-range dependencies significantly increases computational cost.
To address this limitation, we introduce a memory buffer for modeling long-range spatio-temporal consistency while achieving efficient dynamic stereo matching.
Inspired by the two-stage decision-making process in humans, we propose a Pick-and-Play Memory (PPM) construction module for dynamic stereo matching, dubbed PPMStereo. PPM consists of a pick process that identifies the most relevant frames and a play process that adaptively weights the selected frames for spatio-temporal aggregation.
This two-stage collaborative process maintains a compact yet highly informative memory buffer while achieving temporally consistent information aggregation.
Extensive experiments validate the effectiveness of PPMStereo, demonstrating state-of-the-art performance in both accuracy and temporal consistency. Code is available at https://github.com/cocowy1/PPMStereo.
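To make the pick-and-play idea concrete, here is a minimal sketch of such a memory buffer. All names, the FIFO eviction, the cosine-similarity pick, and the softmax play weighting are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

class PickAndPlayMemory:
    """Hypothetical sketch: a compact memory of past-frame features."""

    def __init__(self, capacity=8, top_k=4):
        self.capacity = capacity  # maximum number of stored frames
        self.top_k = top_k        # frames selected by the pick step
        self.buffer = []          # list of pooled per-frame features

    def update(self, feat):
        # feat: (C,) pooled feature of the current frame.
        self.buffer.append(feat)
        if len(self.buffer) > self.capacity:
            # Simple FIFO eviction; the paper's pick process is presumably
            # relevance-driven rather than purely chronological.
            self.buffer.pop(0)

    def aggregate(self, query):
        # Pick: select the top-k stored frames most similar to the query.
        feats = np.stack(self.buffer)                      # (N, C)
        sims = feats @ query / (
            np.linalg.norm(feats, axis=1) * np.linalg.norm(query) + 1e-8)
        k = min(self.top_k, len(self.buffer))
        idx = np.argsort(sims)[-k:]
        # Play: softmax-weight the picked frames for aggregation.
        w = np.exp(sims[idx] - sims[idx].max())
        w /= w.sum()
        return (w[:, None] * feats[idx]).sum(axis=0)       # (C,)

mem = PickAndPlayMemory()
for _ in range(6):
    mem.update(np.random.randn(16))
out = mem.aggregate(np.random.randn(16))
```

The two stages mirror the abstract's description: the pick step keeps the buffer's contribution compact (only k frames participate), while the play step adaptively reweights them for temporally consistent aggregation.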
Primary Area: Applications (e.g., vision, language, speech and audio, Creative AI)
Submission Number: 5026