Keywords: Multi-camera point tracking, 3D point correspondence, Volumetric attention, Multi-view geometry, 3D trajectory reconstruction, Cross-view matching, Structure from motion
TL;DR: We introduce LAPA, the first end-to-end transformer architecture for multi-camera point tracking that uses volumetric attention to maintain consistent 3D trajectories across views and through occlusions.
Abstract: This paper presents LAPA (Look Around and Pay Attention), a novel end-to-end transformer-based architecture for multi-camera point tracking that integrates appearance-based matching with geometric constraints. Traditional pipelines decouple detection, association, and tracking, leading to error propagation and temporal inconsistency in challenging scenarios. LAPA addresses these limitations by leveraging attention mechanisms to jointly reason across views and time, establishing soft correspondences through a cross-view attention mechanism enhanced with geometric priors. Instead of relying on classical triangulation, we construct 3D point representations via attention-weighted aggregation, inherently accommodating uncertainty and partial observations. Temporal consistency is further maintained through a transformer decoder that models long-range dependencies, preserving identities through extended occlusions. Extensive experiments on challenging datasets, including our newly created multi-camera (MC) versions of TAPVid-3D Panoptic and PointOdyssey, demonstrate that our unified approach significantly outperforms existing methods, achieving 37.5% APD on TAPVid-3D-MC and 90.3% APD on PointOdyssey-MC, and particularly excelling in scenarios with complex motions and occlusions. Code is available at https://github.com/ostadabbas/Look-Around-and-Pay-Attention-LAPA.
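The attention-weighted aggregation described in the abstract, as an alternative to classical triangulation, can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the function name, the use of dot-product similarity, and the per-view 3D candidates (e.g. samples along back-projected rays) are all assumptions for the sake of the sketch.

```python
import numpy as np

def attention_aggregate_3d(query_feat, view_feats, view_points_3d, tau=1.0):
    """Fuse per-view 3D candidates into one soft 3D estimate.

    Illustrative sketch (not the paper's actual method):
      query_feat:     (D,)   feature of the tracked point
      view_feats:     (V, D) per-view matched features
      view_points_3d: (V, 3) per-view candidate 3D positions,
                             e.g. from back-projected rays
    Views whose features agree with the query receive higher
    attention weight, so occluded or ambiguous views contribute
    less instead of being hard-rejected as in triangulation.
    """
    # Scaled dot-product similarity between query and each view
    scores = view_feats @ query_feat / (np.sqrt(query_feat.size) * tau)
    scores -= scores.max()            # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum()          # softmax over views
    # Attention-weighted average of the candidate 3D positions
    return weights @ view_points_3d
```

With identical per-view features the weights are uniform and the result reduces to the mean of the candidates; as one view's feature diverges (e.g. under occlusion), its contribution decays smoothly.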
Supplementary Material: zip
Submission Number: 105